Automata and Fixpoints for Asynchronous Hyperproperties
JENS OLIVER GUTSFELD,
Westfälische Wilhelms Universität Münster, Germany
MARKUS MÜLLER-OLM,
Westfälische Wilhelms Universität Münster, Germany
CHRISTOPH OHREM,
Westfälische Wilhelms Universität Münster, Germany

Hyperproperties have received increasing attention in the last decade due to their importance e.g. for security analyses. Past approaches have focussed on synchronous analyses, i.e. techniques in which different paths are compared lockstepwise. In this paper, we systematically study asynchronous analyses for hyperproperties by introducing both a novel automata model (Alternating Asynchronous Parity Automata) and the temporal fixpoint calculus 𝐻_𝜇, the first fixpoint calculus that can systematically express hyperproperties in an asynchronous manner and at the same time subsumes the existing logic HyperLTL. We show that the expressive power of both models coincides over fixed path assignments. The high expressive power of both models is evidenced by the fact that decision problems of interest are highly undecidable, i.e. not even arithmetical. As a remedy, we propose approximative analyses for both models that also induce natural decidable fragments.

Additional Key Words and Phrases: Logics, Automata, Hyperproperties

Hyperproperties [Clarkson and Schneider 2010] are a recent innovation in theoretical computer science. While a traditional trace property (like liveness or safety) refers to single traces, a hyperproperty refers to sets of traces. Hyperproperties of interest include security properties like non-interference or observational determinism since it can only be inferred from combinations of traces and their relation to each other whether a system fulfills these properties. Analysis methods for hyperproperties have been proposed in many contexts, including abstract interpretation [Mastroeni and Pasqua 2017, 2018], runtime verification [Finkbeiner et al. 2019], synthesis [Finkbeiner et al. 2020] and model checking [Clarkson et al. 2014; Finkbeiner et al. 2015; Gutsfeld et al. 2020; Rabe 2016].
In model checking, several temporal logics for hyperproperties have been proposed, including hyperized variants of LTL [Clarkson et al. 2014; Finkbeiner et al. 2015; Rabe 2016], CTL∗ [Clarkson et al. 2014; Finkbeiner et al. 2015; Rabe 2016], QPTL [Coenen et al. 2019; Rabe 2016] and PDL−Δ [Gutsfeld et al. 2020]. In all these logics, specifications are synchronous, i.e. the modalities only allow for lockstepwise traversal of different paths. The same is true of the automata-theoretic frameworks underlying the algorithms for these logics. However, the restriction to synchronous traversal of traces is a conceptual limitation of existing approaches [Finkbeiner 2017] that seems rather artificial. Arguably, the ability to accommodate asynchronous specifications is an important requirement for hyperproperty verification in various scenarios since many interesting properties require comparing traces at different points of time. For instance, synchronous formulations of information flow security properties such as non-interference are often too strict for system abstractions with varying granularity of steps. Here, a proper security analysis requires asynchronicity in order to match points of interest.
Asynchronous hyperproperties also arise naturally in the context of multithreaded programs where each thread is represented by a single trace, as the overall system behaviour is an asynchronous interleaving of the individual traces.

In order to investigate the boundaries of automatic analysis of asynchronous hyperproperties induced by undecidability and complexity limitations, we propose both an automata-theoretic framework, Alternating Asynchronous Parity Automata (AAPA), and a temporal fixpoint calculus, 𝐻_𝜇, for asynchronous hyperproperties in this paper. Our contribution is threefold: first of all, we show that both perspectives indeed coincide over fixed sets of paths by providing a direct and intuitive translation between AAPA and 𝐻_𝜇 formulas. Secondly, using this correspondence, we highlight the limitations of the analysis of asynchronous hyperproperties by showing that major problems of interest for both models (model checking, satisfiability, automata emptiness) are not even arithmetical. Thus, these problems are not only undecidable, but also exhaustive approximation analyses are impossible as they require recursive enumerability. Finally, we consider natural semantic restrictions – 𝑘-synchronicity and 𝑘-context-boundedness – of both models that give rise to families of increasingly precise over- and underapproximate analyses.

Authors' addresses: Jens Oliver Gutsfeld, Institut für Informatik, Westfälische Wilhelms Universität Münster, Einsteinstraße 62, Münster, North Rhine-Westphalia, 48149, Germany, [email protected]; Markus Müller-Olm, Institut für Informatik, Westfälische Wilhelms Universität Münster, Einsteinstraße 62, Münster, North Rhine-Westphalia, 48149, Germany, [email protected]; Christoph Ohrem, Institut für Informatik, Westfälische Wilhelms Universität Münster, Einsteinstraße 62, Münster, North Rhine-Westphalia, 48149, Germany, [email protected].
Also, we identify settings where these analyses yield precise results. We provide precise completeness results for all but one of the corresponding decision problems. Our complexity results for restricted classes of AAPA also shed new light on the classical theory of multitape automata over finite words as both the restrictions and the proofs can be directly transferred.

The rest of the paper is structured as follows: in Section 2, we provide some basic notation and recall the definitions of Alternating Parity Automata and Regular Transducers. Then, in Section 3, we introduce Alternating Asynchronous Parity Automata (AAPA) as a model for the asynchronous analysis of multiple input words and study their closure and decidability properties. As the emptiness problem is undecidable, we discuss approximate analyses which lead to decidability for corresponding fragments. We introduce 𝐻_𝜇 as a novel fixpoint logic for hyperproperties in Section 4. Section 5 establishes the connection between AAPA and 𝐻_𝜇. In Section 6 and Section 7, this connection is used to transfer the approximate analyses of AAPA to 𝐻_𝜇 and to obtain tight complexity bounds for corresponding decision problems. We summarise the paper in Section 8. Due to lack of space, we have transferred some proofs to the appendix.

Related work:
Hyperproperties were systematically considered in [Clarkson and Schneider 2010]. The temporal logics HyperLTL and HyperCTL∗ were introduced in [Clarkson et al. 2014] and efficient algorithms for them were developed in [Finkbeiner et al. 2015]. The polyadic 𝜇-calculus [Andersen 1994] is directly related to hyperproperties. It extends the modal 𝜇-calculus by branching over tuples of states instead of single states and has recently been considered in the context of so-called incremental hyperproperties [Milushev and Clarke 2013]. This logic can express properties that are not expressible in HyperLTL and vice versa [Rabe 2016]. The same relation holds between the polyadic 𝜇-calculus and 𝐻_𝜇: On the one hand, HyperLTL can be embedded into 𝐻_𝜇 trivially, and on the other hand, 𝐻_𝜇 is a linear time logic, while the polyadic 𝜇-calculus is a branching time logic, implying the logics are expressively incomparable. The polyadic 𝜇-calculus was later reinvented [Lange 2015] under the name higher-dimensional 𝜇-calculus [Otto 1999] and it was shown that every bisimulation-invariant property of finite graphs that can be decided in polynomial time can be expressed in it.

A different class of logics with the ability to express hyperproperties are the first- and second-order logics with equal-level predicate MPL[E], MSO[E], FOL[<,E] and S1S[E] [Coenen et al. 2019; Finkbeiner 2017; Spelten et al. 2011]. We believe that 𝐻_𝜇 can be embedded into the most powerful of these logics, S1S[E] and MSO[E]. Since MPL[E] and MSO[E] are branching time logics while 𝐻_𝜇 is a linear time logic, just like FOL[<,E] and S1S[E], we restrict our further analysis to the relationship between 𝐻_𝜇 and these latter two logics. We believe that the expressive power of FOL[<,E] and 𝐻_𝜇 is incomparable: As for HyperLTL [Bozzelli et al.
2015], the property that an atomic proposition does not occur on a certain level in the tree (of traces) – which is directly expressible in FOL[<,E] – likely is not expressible in 𝐻_𝜇. On the other hand, for singleton trace sets, 𝐻_𝜇 and FOL[<,E] degenerate to the linear time 𝜇-calculus and FOL[<], respectively, and it is known that FOL[<] – unlike the linear time 𝜇-calculus – cannot express all 𝜔-regular properties. Notwithstanding S1S[E], we think the study of 𝐻_𝜇 is of interest because (i) it is closer to logics traditionally used in model checking and (ii) there is no obvious way to define decidable approximate analyses for S1S[E] as we do for 𝐻_𝜇. Indeed, all results concerning S1S[E] we are aware of are undecidability results.

The logic 𝐻_𝜇 proposed in the current paper is based on the linear-time 𝜇-calculus [Vardi 1988] and our model checking algorithms use a construction based on alternating parity word automata with holes in the flavour of [Lange 2005] while handling quantifiers via the constructions for HyperCTL∗ from [Finkbeiner et al. 2015]. AAPA are asynchronous 𝜔-automata with a parity acceptance condition. Asynchronous automata on finite words were already introduced in the seminal paper by Rabin and Scott [Rabin and Scott 1959] on finite automata and later considered in many other contexts [Furia 2014; Geidmanis 1987; Ibarra and Trân 2013]. On infinite words, Büchi automata on multiple input words were considered in the context of recursion theory and descriptive set theory [Finkel 2006, 2016]. As far as we are aware, automata of this type with a parity acceptance condition or alternation have not been studied yet and neither have algorithms for decidable restrictions and their exact complexity. A different line of research discusses variants of asynchronous automata for the analysis of concurrent programs [Muscholl 1996; Peled and Penczek 1996; Zielonka 1987].
However, unlike AAPA, these models are concerned with language recognition for trace languages in the context of concurrent systems. For AAPA (and 𝐻_𝜇), we use two types of restrictions: 𝑘-synchronicity and 𝑘-context-boundedness. The first restriction has been discussed in the context of multitape automata [Furia 2014; Ibarra and Trân 2013], while the second restriction is inspired by a similar condition used in the analysis of concurrent programs [Atig et al. 2009; Bansal and Demri 2013; Qadeer 2008; Qadeer and Rehof 2005]. In [Krebs et al. 2017], Krebs et al. consider a team semantics based approach to the verification of hyperproperties using variants of LTL with synchronous and asynchronous semantics. Of course, there is a large body of work on the analysis of asynchronous systems, e.g. [Durand-Gasselin et al. 2015; Esparza et al. 2016; Ganty and Majumdar 2012; Ganty et al. 2009]. However, we are not aware of any such work concerning hyperproperties.

Let AP be a finite set of atomic propositions. A
Kripke Structure is a tuple K := (𝑆, 𝑠_0, 𝛿, 𝐿) where 𝑆 is a finite set of states, 𝑠_0 ∈ 𝑆 is an initial state, 𝛿 ⊆ 𝑆 × 𝑆 is a transition relation and 𝐿 : 𝑆 → 2^{AP} is a labeling function. We assume that there are no states without outgoing edges, that is, for each 𝑠 ∈ 𝑆, there is an 𝑠′ ∈ 𝑆 with (𝑠, 𝑠′) ∈ 𝛿. A path in a Kripke Structure K is an infinite sequence 𝑠_0 𝑠_1 ... ∈ 𝑆^𝜔 where 𝑠_0 is the initial state of K and (𝑠_𝑖, 𝑠_{𝑖+1}) ∈ 𝛿 holds for all 𝑖 ≥
0. We denote by Paths(K) the set of paths in K starting in 𝑠_0. A trace is an infinite sequence from the set (2^{AP})^𝜔. For a path 𝑠_0 𝑠_1 ..., the induced trace is given by 𝐿(𝑠_0)𝐿(𝑠_1).... We write Traces(K) to denote the traces induced by paths of a Kripke Structure K starting in 𝑠_0.

Let Σ be a finite input alphabet. A (nondeterministic) regular transducer over Σ is a tuple T = (𝑄, 𝑞_0, 𝛾) where 𝑄 is a finite set of control locations, 𝑞_0 ∈ 𝑄 is an initial location and 𝛾 : 𝑄 × Σ → 2^{𝑄 × Σ} is a transition function. Given a word 𝑤 = 𝑤_1 ... 𝑤_𝑛 ∈ Σ^∗, a run of T on 𝑤 is an alternating sequence 𝑞_0 𝑣_1 𝑞_1 𝑣_2 ... 𝑣_𝑛 𝑞_𝑛 such that (𝑞_{𝑖+1}, 𝑣_{𝑖+1}) ∈ 𝛾(𝑞_𝑖, 𝑤_{𝑖+1}) for all 0 ≤ 𝑖 < 𝑛. We then call T(𝑤) := 𝑣_1 ... 𝑣_𝑛 ∈ Σ^∗ an output of T on 𝑤. Intuitively, a nondeterministic regular transducer can be seen as a nondeterministic finite automaton (NFA) with output.

An Alternating Parity Automaton (APA) over Σ is a tuple A = (𝑄, 𝑞_0, 𝜌, Ω) such that 𝑄 is a finite, non-empty set of control locations, 𝑞_0 ∈ 𝑄 is an initial control location, 𝜌 : 𝑄 × Σ → B^+(𝑄) is a function that maps control locations and input symbols to positive boolean formulas over control locations and Ω : 𝑄 → {0, 1, . . . , 𝑘} is a function that maps control locations to priorities. We assume that every APA has two distinct states true with priority 0 and false with priority 1 such that 𝜌(true, 𝜎) = true and 𝜌(false, 𝜎) = false for all 𝜎 ∈ Σ. If 𝜌(𝑞, 𝜎) only consists of disjunctions for every 𝑞 and 𝜎, we call an APA a Nondeterministic Parity Automaton (NPA) and denote 𝜌(𝑞, 𝜎) as a set of control locations. Additionally, we allow states 𝑋 with 𝜌(𝑋, 𝜎) = ⊥ and Ω(𝑋) = ⊥, which we call holes [Lange 2005]. Intuitively, a hole is a state where the construction of an APA is not yet finished.
By A[𝑋 := A′], we denote the APA A where the hole 𝑋 is replaced by the automaton A′.

A tree 𝑇 is a subset of N^∗ such that for every node 𝑡 ∈ N^∗ and every positive integer 𝑛 ∈ N: 𝑡 · 𝑛 ∈ 𝑇 implies (i) 𝑡 ∈ 𝑇 (we then call 𝑡 · 𝑛 a child of 𝑡), and (ii) for every 0 < 𝑚 < 𝑛, 𝑡 · 𝑚 ∈ 𝑇. We assume every node has at least one child. A path in a tree 𝑇 is a sequence of nodes 𝑡_0 𝑡_1 ... such that 𝑡_0 = 𝜀 and 𝑡_{𝑖+1} is a child of 𝑡_𝑖 for all 𝑖 ∈ N. A run of an APA A on an infinite word 𝑤 ∈ Σ^𝜔 is defined as a 𝑄-labeled tree (𝑇, 𝑟) where 𝑟 : 𝑇 → 𝑄 is a labelling function such that 𝑟(𝜀) = 𝑞_0 and for every node 𝑡 ∈ 𝑇 with children 𝑡_1, ..., 𝑡_𝑘, we have 1 ≤ 𝑘 ≤ |𝑄| and the valuation assigning true to the control locations 𝑟(𝑡_1), ..., 𝑟(𝑡_𝑘) and false to all other control locations satisfies 𝜌(𝑟(𝑡), 𝑤(|𝑡|)). A run (𝑇, 𝑟) is an accepting run iff for every path 𝑡_0 𝑡_1 ... in 𝑇, the lowest priority occurring infinitely often is even. A word 𝑤 is accepted by A iff there is an accepting run of A on 𝑤. The set of infinite words accepted by A is denoted by L(A). Extending the notion of holes, we write
A[𝑋 := L] for a language L ⊆ Σ^𝜔 to denote A[𝑋 := A′] for some automaton A′ with L(A′) = L. We call an APA (resp. NPA) an Alternating Büchi Automaton (resp. Nondeterministic Büchi Automaton) iff its priorities are 0 and 1. We abbreviate these automata as ABA and NBA. In the remainder of this paper, we use known theorems about parity and Büchi automata (without holes):

Proposition 2.1 ([Dax and Klaedtke 2008]).
For every APA A with 𝑛 states and 𝑘 priorities, there is a nondeterministic Büchi automaton with 2^{O(𝑛 · 𝑘 · log 𝑛)} states accepting the same language.

Proposition 2.2.
For every APA A with 𝑛 states and 𝑘 priorities, there is an APA A′ with 𝑛 states and 𝑘 priorities that recognises the complement language.

Proposition 2.3.
The emptiness problem is PSPACE-complete for APA and NLOGSPACE-complete for NPA and NBA.
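The positive boolean formulas B^+(𝑄) appearing in the transition functions above can be evaluated directly. The nested-tuple encoding of formulas below is a hypothetical choice for illustration.

```python
# Hypothetical encoding of positive boolean formulas over control locations:
# a formula is either a bare location or a tuple ("and"/"or", left, right).
def evaluate(formula, true_states):
    """Truth value of a formula under the valuation making exactly the
    locations in true_states true, as in the run condition for APA."""
    if isinstance(formula, tuple):
        op, left, right = formula
        l, r = evaluate(left, true_states), evaluate(right, true_states)
        return (l and r) if op == "and" else (l or r)
    return formula in true_states
```

A run node labelled 𝑞 with children labelled by `true_states` is then legal iff `evaluate(rho(q, sigma), true_states)` holds.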
Proposition 2.2 and Proposition 2.3 can be found e.g. in [Demri et al. 2016]. On multiple occasions in this paper, we use a function for nested exponentials. Specifically, we define 𝑔_{𝑐,𝑝}(0, 𝑛) := 𝑝(𝑛) and 𝑔_{𝑐,𝑝}(𝑑+1, 𝑛) := 𝑐^{𝑔_{𝑐,𝑝}(𝑑,𝑛)} for a constant 𝑐 > 1 and a polynomial 𝑝. For 𝑐 = 2 and 𝑝 = 𝑖𝑑, i.e. the identity function, we omit the subscripts in order to improve readability. By slight abuse of notation, we say that a function 𝑓 is in O(𝑔(𝑑, 𝑛)) if 𝑓 is in O(𝑔_{𝑐,𝑝}(𝑑, 𝑛)) for some constant 𝑐 > 1 and polynomial 𝑝. We straightforwardly extend this notion to multiple 𝑔 functions, where different constants 𝑐 and polynomials 𝑝 can be used for the various 𝑔 functions. Also, we write SPACE(𝑔(𝑑, 𝑛)) as an abbreviation for ⋃_{𝑐,𝑝} SPACE(𝑔_{𝑐,𝑝}(𝑑, 𝑛)).

We introduce a new class of automata for the asynchronous traversal of multiple 𝜔-words.

Definition 3.1 (Alternating Asynchronous Parity Automaton).
Let 𝑀 = {1, . . . , 𝑛} be a set of directions and Σ an input alphabet. An Alternating Asynchronous Parity Automaton (AAPA) is a tuple A = (𝑄, 𝜌_0, 𝜌, Ω) where
• 𝑄 and Ω are the same as in an APA,
• 𝜌_0 ∈ B^+(𝑄) is a positive boolean combination of initial states, and
• 𝜌 : 𝑄 × Σ × 𝑀 → B^+(𝑄) maps triples of control locations, input symbols and directions to positive boolean combinations of control locations.
Just as for APA, we call an AAPA where 𝜌(𝑞, 𝜎, 𝑑) and 𝜌_0 only consist of disjunctions a Nondeterministic Asynchronous Parity Automaton (NAPA). Compared to an APA, where a single word over Σ is given as input, an AAPA has access to 𝑛 input words over Σ and can perform steps on them individually. The 𝑀 argument of the transition function indicates on which input word to progress. Note that any APA can be seen as an AAPA with 𝑛 =
1. The definition of a run 𝑇 of an AAPA is similar to the one for a run of an APA, but with the following modifications:
• the run is defined over 𝑛 input words 𝑤_1, ..., 𝑤_𝑛 ∈ Σ^𝜔 instead of a single word 𝑤,
• for each 𝑡 ∈ 𝑇, we have 𝑛 offset counters 𝑐_1^𝑡, ..., 𝑐_𝑛^𝑡 starting at 𝑐_𝑖^𝑡 = 0 for all 𝑖 and all 𝑡 with |𝑡| ≤ 1,
• we have {𝑟(𝑡) | 𝑡 ∈ 𝑇, |𝑡| = 1} ⊨ 𝜌_0, and
• when node 𝑡 ∈ 𝑇 \ {𝜀} has children 𝑡_1, ..., 𝑡_𝑘, then there is a 𝑑 ∈ 𝑀 such that (i) 𝑐_𝑑^{𝑡_𝑖} = 𝑐_𝑑^𝑡 + 1 and 𝑐_{𝑑′}^{𝑡_𝑖} = 𝑐_{𝑑′}^𝑡 for all 𝑖 and 𝑑′ ≠ 𝑑, (ii) we have 1 ≤ 𝑘 ≤ |𝑄| and (iii) the valuation assigning true to 𝑟(𝑡_1), ..., 𝑟(𝑡_𝑘) and false to all other states satisfies 𝜌(𝑟(𝑡), 𝑤_𝑑(𝑐_𝑑^𝑡), 𝑑).
These automata are particularly suitable for the analysis of our new logic 𝐻_𝜇, which we introduce in the next section. Indeed, AAPA and 𝐻_𝜇 are able to express the same asynchronous restrictions on multiple 𝜔-words, as shown in Section 5. In order to compare AAPA to different automata models over a single input word, we interpret the 𝑛 input words 𝑤_1, ..., 𝑤_𝑛 with 𝑤_𝑖 = 𝑤_𝑖(0)𝑤_𝑖(1)... ∈ Σ^𝜔 as a single word 𝑤 = (𝑤_1(0), ..., 𝑤_𝑛(0))(𝑤_1(1), ..., 𝑤_𝑛(1))... ∈ (Σ^𝑛)^𝜔. We introduce some notation for the asynchronous manipulation of such words. For this purpose, let 𝑣 = (𝑣_1, ..., 𝑣_𝑛) ∈ N^𝑛 be a vector. Then, we use 𝑤[𝑣] = (𝑤_1(𝑣_1), ..., 𝑤_𝑛(𝑣_𝑛))(𝑤_1(𝑣_1+1), ..., 𝑤_𝑛(𝑣_𝑛+1))... to denote 𝑤 shifted left according to the entries in 𝑣. We write 𝑣 + 𝑣′ and 𝑣 + 𝑒_𝑖 for standard vector addition with an arbitrary vector 𝑣′ and a unit vector 𝑒_𝑖 = (0, ..., 0, 1, 0, ..., 0), respectively.

We establish some properties of AAPA and their nondeterministic counterparts. First, a standard argument using alternation and priority shifts gives us the following result about AAPA:

Theorem 3.2.
AAPA are closed under union, intersection and complement. The constructions are linear in the size of the input automata.
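A sketch of the standard ingredients behind such closure constructions, under the assumption that complement is obtained by dualizing transition formulas and shifting priorities; the formula encoding (nested tuples) is hypothetical.

```python
def dualize(formula):
    """Swap conjunctions and disjunctions in a positive boolean formula,
    encoded as nested ("and"/"or", left, right) tuples or bare locations."""
    if isinstance(formula, tuple):
        op, left, right = formula
        return ("or" if op == "and" else "and", dualize(left), dualize(right))
    return formula

def complement_sketch(transitions, priorities):
    """Complement sketch: dualize every transition formula and shift every
    priority by one. Both maps are traversed once, hence the linear size."""
    rho = {key: dualize(f) for key, f in transitions.items()}
    omega = {q: p + 1 for q, p in priorities.items()}
    return rho, omega
```

Union and intersection are similarly cheap: a fresh disjunctive (resp. conjunctive) combination of the two initial conditions suffices, which is where alternation pays off.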
However, for their nondeterministic counterpart, the above result does not hold:
Theorem 3.3.
NAPA are not closed under intersection. In particular, it is undecidable whether there is a NAPA recognizing the intersection
L(A_1) ∩ L(A_2) for two NAPA A_1, A_2.

Proof.
Let finite state Asynchronous Automata (AA) be NAPA on finite words which accept input words by reaching accepting states. Fix two languages L_1, L_2 recognizable by AA over an alphabet Σ. Let ⊥ be a fresh symbol. Then L̂_1 = {𝑤⊥^𝜔 | 𝑤 ∈ L_1} and L̂_2 = {𝑤⊥^𝜔 | 𝑤 ∈ L_2} are recognizable by two NAPA. By a standard argument, from a NAPA recognizing L̂_1 ∩ L̂_2, we obtain an AA recognizing L_1 ∩ L_2, but AA are not closed under intersection and it is undecidable whether there is an AA recognizing the intersection [Furia 2014]. □

Due to this difference in closure properties, we obtain a gap in expressivity:
Corollary 3.4.
There is an AAPA such that no NAPA recognises the same language. It is undecidable whether an AAPA can be translated to a NAPA that recognises the same language.
A related result regarding the intersection of NAPA is:
Theorem 3.5.
The emptiness problem for the intersection of NAPA is undecidable.
Proof.
The proof is by reduction from the Post Correspondence Problem (PCP). Let 𝐼 = (𝑤_1, 𝑢_1) . . . (𝑤_𝑛, 𝑢_𝑛) be a PCP instance. We again choose ⊥ as a fresh symbol. Let L_1 be the language consisting of all possible concatenations of (𝑤_𝑖, 𝑢_𝑖) followed by (⊥, ⊥)^𝜔 and L_2 be the language {(𝑎_1, 𝑎_1)(𝑎_2, 𝑎_2) . . . (𝑎_𝑙, 𝑎_𝑙)(⊥, ⊥)^𝜔 | 𝑎_𝑖 ∈ Σ, 𝑙 ∈ N}. L_1 can be recognised by a NAPA with loops on an initial state that first read 𝑤_𝑖 in direction 1 and then read 𝑢_𝑖 in direction 2. L_2 can be recognised by inspecting a single symbol of both directions in turns and making sure that the same symbol is being read. Then the PCP instance has a solution iff L_1 ∩ L_2 is non-empty. □

Since the intersection of two NAPA can be recognised by an AAPA, we immediately obtain:
Corollary 3.6.
The emptiness problem for AAPA is undecidable.
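The reduction in the proof of Theorem 3.5 rests on the fact that a PCP instance has a solution iff some index sequence makes the two concatenations agree. A brute-force sketch of exactly that condition (the bounded search depth is our own simplification; PCP itself is undecidable):

```python
from itertools import product

def pcp_has_solution(instance, max_len=6):
    """Bounded brute-force PCP check: is there an index sequence i1...il
    with w_i1 ... w_il = u_i1 ... u_il? This mirrors the intersection
    L1 ∩ L2 from the reduction, where L1 tracks the pairs and L2 forces
    both tracks to agree symbol by symbol."""
    for l in range(1, max_len + 1):
        for idx in product(range(len(instance)), repeat=l):
            top = "".join(instance[i][0] for i in idx)
            bottom = "".join(instance[i][1] for i in idx)
            if top == bottom:
                return True
    return False
```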
Furthermore, since AAPA can be used to decide inclusion problems for two-tape Büchi automata [Finkel and Lecomte 2009], this result can be strengthened drastically. We refer the interested reader to the appendix for a detailed proof of the strengthened claim.
Theorem 3.7.
The emptiness problem for AAPA is hard for the level Σ^1_1 of the analytical hierarchy.

When comparing NAPA to synchronous automata, it is not hard to see that the language L = {(𝑎, 𝑎, 𝑎)^𝑛 (𝑏, 𝑎, 𝑎)^𝑛 (𝑏, 𝑏, 𝑎)^𝑛 (𝑏, 𝑏, 𝑏)^𝜔 | 𝑛 ∈ N} can be recognised by a NAPA, while there is no synchronous parity automaton recognising L since it is not an 𝜔-regular language. Despite this increase in expressive power, the emptiness problem for NAPA can still be reduced to an emptiness test on their synchronous counterparts.

Theorem 3.8.
The emptiness problem for NAPA is PSPACE-complete.
Proof.
Given a NAPA A with 𝑚 states and 𝑛 input words over the alphabet Σ, we construct an NPA A′ with 𝑚 · |Σ|^𝑛 states that stores the currently accessible input vector in its state and guesses the next input symbol for each direction. For hardness, we can encode the configurations of a Turing machine whose space is bounded by a polynomial 𝑝(𝑥) in a NAPA with 𝑛 = 𝑝(𝑥) directions. Consistency of successive configurations can be checked locally such that there is no need to represent full configurations in the input alphabet or the state space. Therefore, the size of the NAPA stays polynomial despite the fact that the Turing machine has exponentially many configurations. Thus, we can check for the existence of an accepting run of the Turing machine. □

Since the translation used in the proof is only exponential in 𝑛 and NAPA subsume synchronous Büchi automata, we obtain the following corollary:

Corollary 3.9.
For fixed 𝑛, the emptiness problem for NAPA is NLOGSPACE-complete.
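The construction in the proof of Theorem 3.8 pairs each NAPA state with one guessed symbol per direction. A sketch of the resulting state space (names hypothetical) makes the 𝑚 · |Σ|^𝑛 bound, and hence the exponential dependence on 𝑛 alone, concrete:

```python
from itertools import product

def product_states(napa_states, sigma, n):
    """State space of the synchronous NPA sketched for Theorem 3.8: a NAPA
    state paired with one guessed symbol per direction (the currently
    accessible input vector), m * |Sigma|**n states in total."""
    return [(q, vec) for q in napa_states for vec in product(sigma, repeat=n)]
```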
As emptiness of AAPA cannot be decided and as alternation elimination is not possible, we study analyses that consider well-specified subclasses of runs and identify fragments of AAPA for which these analyses are precise.

𝑘-synchronous analysis of AAPA

Definition 3.10.
We call a run 𝑇 of an AAPA with 𝑛 input words 𝑘-synchronous for a 𝑘 ∈ N ∪ {∞}, if in every node 𝑡 in 𝑇, the offset counters 𝑐_1^𝑡, ..., 𝑐_𝑛^𝑡 satisfy |𝑐_𝑖^𝑡 − 𝑐_𝑗^𝑡| ≤ 𝑘 for all 𝑖 and 𝑗.

Intuitively, a 𝑘-synchronous run has the property that the AAPA can never be ahead more than 𝑘 steps in one direction than in any other. This gives rise to an approximate analysis where only the 𝑘-synchronous runs of an AAPA are considered. Since a 𝑘-synchronous run is in particular a 𝑘′-synchronous run for all 𝑘′ ≥ 𝑘, the approximation improves with increasing 𝑘, capturing the whole semantics at 𝑘 = ∞ for all AAPA. Since an analysis with 𝑘 = ∞ is impossible, we assume that 𝑘 < ∞ in the remainder of this section. We show that 𝑘-synchronous runs of an AAPA can be analysed via a reduction to APA:

Theorem 3.11.
For every AAPA A over Σ with 𝑙 priorities, 𝑛 input words and 𝑚 states, there is an APA over Σ^𝑛 with O(𝑙 · 𝑚 · |Σ|^{𝑘·𝑛}) states recognizing all words accepted by a 𝑘-synchronous run of A.

Proof.
We read input vectors synchronously and maintain a 𝑘 · 𝑛 window of the input words in the state space. Since in a 𝑘-synchronous run of an AAPA each direction can be ahead of each other direction at most 𝑘 steps, no direction can leave this window without all rearmost directions performing steps. The content of the window can therefore be stored in an APA's state space. We simulate steps of the AAPA by moving markers forward in each row of the window. When the last marker leaves the rearmost column of the window, a new input vector is read and added to the front. A step that would leave the window must not be simulated since that would move that direction more than 𝑘 steps ahead of another; we transition to false instead. This way, we ensure that each run of the APA simulates a 𝑘-synchronous run of the AAPA. Since an input vector is only read when all directions have left the last column, one step of the APA has to simulate multiple successive steps of the AAPA. This is done by a nondeterministic choice over all sequences of steps resulting in the rearmost column being erased. In order to correctly mirror the priorities, for each simulated sequence of steps, we move to a copy of the reached state annotated with the lowest priority encountered in this sequence. In order to fill the window, 𝑘 input vectors are read in an initialisation phase of the APA. Overall, the size of the state space increases by a factor of O(𝑙 · |Σ|^{𝑘·𝑛}). □

Note that Theorem 3.11 yields an underapproximation of an AAPA's behaviour. By transitioning to true instead of false at a violation of 𝑘-synchronicity, we can instead perform an overapproximation that ignores non-𝑘-synchronous branches when determining whether a run is accepting. Obviously, these approximations yield exact results for AAPA where all runs are 𝑘-synchronous. We call such AAPA 𝑘-synchronous. Synchronicity can be enforced by a syntactic restriction, namely that in all cycles in the transition graph, every direction occurs the same number of times.
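Along a single branch of a run, the offset condition of Definition 3.10 can be checked in one pass over the branch's direction choices; a small sketch (names hypothetical):

```python
def is_k_synchronous(direction_seq, n, k):
    """Check the offset condition of Definition 3.10 along one run branch:
    direction_seq lists, per step, which of the n directions advances, and
    no offset counter may get more than k ahead of any other."""
    counters = [0] * n
    for d in direction_seq:
        counters[d] += 1
        if max(counters) - min(counters) > k:
            return False
    return True
```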
The parameter 𝑘 is then induced by the largest difference of the number of occurrences of two directions on any path in the transition graph. We establish a tight complexity bound for this analysis:

Theorem 3.12.
The problem to decide whether there is a 𝑘-synchronous accepting run of an AAPA, and thus the emptiness problem for 𝑘-synchronous AAPA, is EXPSPACE-complete.
Proof.
Using Theorem 3.11, we can construct an APA whose size is exponential in the size of the input. Since APA can be tested for emptiness in PSPACE, this establishes membership in EXPSPACE. For hardness, we reduce from the acceptance problem for deterministic exponentially space bounded Turing machines (DTMs). Let M be a DTM with control locations 𝑄 and space complexity 𝑓(𝑥) = 2^{𝑝(𝑥)} for some polynomial 𝑝. For a given input 𝑤 of M, we construct a 2-synchronous AAPA A with 𝑛 := 𝑝(|𝑤|) + 1 directions whose first direction is expected to contain the sequence of configurations of M separated by a marker. We encode such configurations by words over {0, 1} ∪ ({0, 1} × 𝑄) where the occurrence of a tuple (𝑏, 𝑞) indicates that the head points to this position, that this position of the tape contains the bit 𝑏, and that the current state is 𝑞. Presence of initial and final configuration can easily be checked via alternation. Additionally, we need to check that successive configurations are constructed in accordance with the transition function of M. For this purpose, we have to count to exponentially large values in order to compare positions with the same (or neighbouring) index in successive configurations with each other. Since we cannot store exponentially large counter values in the state, we instead construct a virtual counter gadget: for every additional direction, we can enforce that it consists of the word (10)^𝜔. As we illustrate in Figure 1, the concatenation of the current values of these directions is interpreted as a counter. Then, we conjunctively move to each position in a synchronous manner, save the value of the first direction in a state of the AAPA and initialise the counter: if the directions have value 1, we advance them by one symbol and otherwise, we keep the position. We then advance the first direction and increase our virtual counter by changing the appropriate bits via single advancements and maintaining the other bits by nondeterministically
either maintaining the current symbol or advancing the direction by two positions. Then, one of the nondeterministic choices preserves 2-synchronicity. When the counter has reached 2^{𝑝(|𝑤|)}, we have found the matching tape cell in the next configuration and can check whether it is admissible by comparing its value with the value saved in the control state of the AAPA. It is easy to see that A has a 2-synchronous accepting run iff M accepts 𝑤. □

Fig. 1. A 2-synchronous virtual counter gadget with its bits and current value. The Turing machine is in state 𝑞 and its head is on the fourth bit of the current configuration.

Note that in the proof of the lower bound detailed above, 𝑘 can be chosen as a fixed value. This raises the question whether there is a construction for the emptiness test that is exponential only in 𝑛, but not in 𝑘. Indeed, we can construct an APA that has a non-empty language if and only if the given AAPA has an accepting 𝑘-synchronous run. However, unlike the APA in the proof of Theorem 3.11, the constructed APA only accepts a certain encoding of the word accepted by the 𝑘-synchronous run instead of the word itself. More specifically, it expects the input word to consist of concatenations of input windows (of size 𝑘 · 𝑛) of the original input words, one window for each simulated step of the AAPA. For each direction it maintains a counter indicating its current position in the input window. Using alternation and additional counters, the APA can ensure that the succession of input windows is consistent. A single step of the AAPA can then be simulated by reading the current copy of the input window in order to check that the direction inducing the step has the correct symbol. An upper bound on the number of states of this APA is dominated by the 𝑛 counters up to 𝑘 indicating on what position in the window each direction is. This results in a factor of 𝑘^𝑛 on the number of states but stays polynomial for fixed 𝑛.
As single steps of the AAPA are simulated separately, this construction also avoids the additional factor 𝑙. Together with the fact that already the emptiness problem for APA is PSPACE-hard, these considerations lead to the following corollary.

Corollary 3.13.
For fixed 𝑛, the emptiness problem for 𝑘-synchronous AAPA is PSPACE-complete.

𝑘-context-bounded analysis of AAPA

Since different words can only diverge from each other up to 𝑘 steps, the ability of AAPA to asynchronously traverse words is severely restricted in 𝑘-synchronous runs. We thus consider a further class of runs where the positions on different words can diverge unboundedly. For this restriction, we introduce the notion of a context:

Definition 3.14.
A context is a (possibly infinite) subpath 𝑝 = 𝑡_1 𝑡_2 ... in a run of an AAPA over 𝑤_1, ..., 𝑤_𝑛 such that transitions between successive states all use the same direction, that is, there is a 𝑑 ∈ 𝑀 such that for all 𝑖 ∈ {1, ..., |𝑝| − 1} we have 𝑐_𝑑^{𝑡_{𝑖+1}} = 𝑐_𝑑^{𝑡_𝑖} +
1. We call a run 𝑇 of an AAPA 𝑘-context-bounded if every path in 𝑇 consists of at most 𝑘 contexts.

We propose an approximate analysis which checks only for the existence of a 𝑘-context-bounded accepting run of an AAPA. In the appendix, we show that the restriction that AAPA can only consider a single direction during each context is well-chosen in the sense that additionally allowing contexts in which a selection of directions is traversed synchronously leads to undecidability.
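Whether a single branch is 𝑘-context-bounded likewise depends only on its direction sequence; a small sketch (names hypothetical):

```python
def num_contexts(direction_seq):
    """Number of maximal blocks of consecutive steps in one direction along
    a run branch, i.e. the number of contexts as in Definition 3.14."""
    contexts, prev = 0, None
    for d in direction_seq:
        if d != prev:
            contexts, prev = contexts + 1, d
    return contexts

def is_k_context_bounded(direction_seq, k):
    """A branch is k-context-bounded iff it consists of at most k contexts."""
    return num_contexts(direction_seq) <= k
```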
Consequently, the automaton responsible for direction 2 can safely cut off the branches rooted at these two reentry states since the automaton responsible for direction 1 ensures an accepting continuation from them. We furnish states with descriptions of this interplay between assumptions and guarantees and call such descriptions guesses. These guesses allow us to split the run from Figure 2 into two runs on single directions as shown in Figure 3. Guesses are nested and the nesting depth corresponds to the number of context switches that can still be performed in a k-context-bounded run. They are constructed inductively: in the simplest case, where no context switch can be made, the guess is empty. If context switches are still possible, the guess is a set of states enriched with guesses with one context switch less.

Formally, when analysing an AAPA A = (Q, ρ_0, ρ, Ω) over n words from Σ^ω, we inductively define the set of guesses G_i for i context switches as follows: G_0 := {⊥}, G_{i+1} := 2^(Q × G_i) for i ≥ 0 and G := ⋃_{i=0}^{k−1} G_i. For simplicity, we assume that each state q ∈ Q moves in a unique direction d (i.e. the transition function yields false for other directions); the set of states that move in direction d is called Q_d. This assumption increases the size of the state space by a factor of at most n. As described above, we skip sections belonging to directions d′ ≠ d when considering a direction d. In order to extract the potential reentry points for direction d from a guess, we use a frontier function F_d : G → G defined as follows:

F_d(g) = {(q, g′) ∈ g | q ∈ Q_d} ∪ ⋃ {F_d(g′) | (q, g′) ∈ g, q ∉ Q_d}.
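The nested guesses and the frontier function lend themselves to a direct recursive implementation. The following is a minimal sketch, not from the paper; it assumes guesses are represented as frozensets of (state, inner guess) pairs with ⊥ rendered as None, and that `direction_of` maps each state to its unique direction as in the text:

```python
def frontier(g, d, direction_of):
    """Reentry points F_d(g) for direction d in a guess g.

    A guess is a frozenset of pairs (state, inner_guess); the empty
    guess (no context switch left) is rendered as None.
    """
    result = set()
    for q, inner in g:
        if direction_of[q] == d:
            result.add((q, inner))  # state already moves in direction d: keep it
        elif inner is not None:
            # state belongs to another direction: descend into its guess
            result |= frontier(inner, d, direction_of)
    return frozenset(result)

# Hypothetical example: q2 moves in direction 2, its guess promises q3 (direction 1).
direction_of = {'q1': 1, 'q2': 2, 'q3': 1}
g = frozenset({('q2', frozenset({('q3', None)}))})
print(frontier(g, 1, direction_of))  # frozenset({('q3', None)})
```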
For the analysis, we define from A an APA-like structure S = (S, ρ_S, Ω_S) over the alphabet Σ with S = Q × G,

ρ_S((q, g), σ) = ⋁ { ⋀ (Q′_d × {g}) ∧ ⋀ {F_d(g′′) | (q′, g′′) ∈ g′} | Q′_d ⊆ Q_d, g′ ⊆ g, Q′_d ∪ {q′ | ∃g′′ : (q′, g′′) ∈ g′} ⊨ ρ(q, σ, d) }

for q ∈ Q_d, and Ω_S((q, g)) = Ω(q). Thus, the successors of a state (q, g) in S are composed of two sets: a set Q′_d of states that belong to the same section and the set of reentry points extracted from the second component of the guesses in g′ annotating states for other directions in g. These sets are chosen such that Q′_d and the states in the first component of the guesses in g′ satisfy the corresponding transition function of A. For this structure, we have:

Lemma 3.15.
There is a k-context-bounded accepting run of A iff there is a g ∈ G such that S accepts from ⋀ {F_d({(q_0, g)}) | q_0 ∈ Q_0} for all d ∈ {1, ..., n} and some Q_0 with Q_0 ⊨ ρ_0.

With a little effort, this structure can be translated to an NPA over Σ^n recognising the words accepted by k-context-bounded runs of the original AAPA A. For this, transitions in each state q ∈ Q_d of S are adjusted to the input alphabet Σ^n by allowing arbitrary symbols in directions other than d. The key insight is that the choice of the guess g from Lemma 3.15 can be integrated into an NPA after an alternation removal as a nondeterministic choice. We parameterise the structure S in the initial guess g (for which we assume w.l.o.g. g ∈ G_{k−1}) and obtain a structure S(g) of size O(g(k−2, |A|)), where the fixed g ∈ G indicates the initial guess. The initial state of S(g) can nondeterministically guess a state set Q_0 with Q_0 ⊨ ρ_0 and then move to ⋀_{d ∈ {1,...,n}} ⋀_{q_0 ∈ Q_0} ⋀ F_d({(q_0, g)}) to obtain the desired test. We eliminate alternation from S(g) to obtain an NPA S′(g) of size O(g(k−1, |A|)) for each guess g ∈ G_{k−1}. Thus, an NPA which nondeterministically guesses a g ∈ G and then moves to S′(g) performs the desired test from Lemma 3.15. Since there are |G| = O(g(k−1, |A|)) possible guesses, this NPA's size asymptotically is O(g(k−1, |A|)) as well. We obtain:

Corollary 3.16.
For every AAPA A, there is an NPA with O(g(k−1, |A|)) states recognising all words accepted by a k-context-bounded run of A.

Note that, similar to the construction for Theorem 3.11, a violation of k-context-boundedness leads to a transition to false in our construction (in this case by an empty disjunction in the transition function since there is no g′ ⊆ g fulfilling the conditions for g = ⊥). This again yields an underapproximation of the AAPA's behaviour and can be transformed into an overapproximation by instead transitioning to true. Again, the developed analysis is precise for AAPA having only k-context-bounded runs, which we call k-context-bounded AAPA. A syntactic restriction ensuring that an AAPA is context-bounded is that each strongly connected component (SCC) in its transition graph uses steps of a unique direction only. The parameter k is then induced by the maximum number of switches between directions in the DAG of SCCs. We show that the size of this construction cannot asymptotically be improved.

Theorem 3.17.
The problem to decide whether there is a k-context-bounded accepting run of an AAPA A, and thus the emptiness problem for k-context-bounded AAPA, is complete for (k−1)-EXPSPACE.

For the proof, we establish two helpful lemmas about AAPA with context bounds. We make use of Stockmeyer's nested index encoding [Stockmeyer 1974]. For a (finite) word w = w_0 ... w_n, the encoding is inductively defined by:

stock_0(w) := [_0 w_0 ... w_n ]_0
stock_{k+1}(w) := [_{k+1} stock_k(bin(0)) w_0 ... stock_k(bin(n)) w_n ]_{k+1}

Here, bin(i) denotes a binary encoding of i ∈ ℕ. For our proofs, it will be useful to use a least-significant-bit-first encoding for bin(i). Brackets [_k and ]_k allow us to identify the beginning and end of a level k encoding without necessarily knowing the exact length of the encoding. For level k ≥
1, every letter of the word is preceded by its index in the word. These indices are written in binary and thus form words of their own, which can be encoded as well. The number of times the encoding of indices is nested is determined by the level of the encoding. Since indices for a word of length n can be encoded in words of length log n, every level of the encoding allows to encode exponentially larger words when starting from a fixed size. Thus, with level 0 encodings of length m, one can encode words of length g(k, m) on level k.

The first lemma shows that by using this encoding, we can perform certain operations on sequences of words of length g(k, m). It is formulated in a generic way using regular transducers and regular languages in order to be applicable to nested index encodings as well as successions of Turing machine configurations. The first is needed for its inductive proof; the second allows us to apply it for the proof of Theorem 3.17. The operations we need for our result are the following:

• Checking whether the first and the last word of the sequence are contained in two given regular languages and
• checking whether each word (apart from the first one) is obtained by applying a given regular transducer to the previous word.

More concretely, for sequences of indices, we want to check:

• Is the first index 0...0?
• Is the last index 1...1?
• Is each successor obtained by adding 1 to the previous index?

For Turing machine configurations, the checks are:

• Is the first configuration the starting configuration?
• Is the last configuration an accepting configuration?
• Is each configuration obtained by applying the Turing machine's transition function to its predecessor configuration?

Note that for our reduction to be polynomial, each of the regular language acceptors and regular transducers has to be of polynomial size in the input size, which they indeed are. With these considerations in mind, it is straightforward to see that an application of the next lemma can be used to encode the acceptance problem of g(k−1, p(n)) space bounded Turing machines in k-context-bounded AAPA. It is also easy to see that an application to sequences of indices can be used to check the validity of nested index encodings. We now formulate the lemma:

Lemma 3.18.
Given a size bound m, an AAPA of size polynomial in m with access to directions i and j, as well as k ≥ 2 context switches starting in a context for direction i, can:

(1) enforce that the word in direction i contains a level k−1 nested index encoding of all numbers between 0 and g(k−1, m) − 1 and
(2) enforce that the word in direction j contains a sequence of level k−1 encoded words w_1 w_2 ... w_l, each of length g(k−1, m), separated by markers, such that
• w_1 is contained in a given regular language with an acceptor of size O(m),
• w_l is contained in another given regular language with an acceptor of size O(m), and
• for h < l, w_{h+1} is obtained from w_h by applying a regular transducer of size O(m).

For the proof, we need the following lemma about AAPA:
Lemma 3.19.
Given a size bound m, an AAPA of size polynomial in m with k ≥ 1 context switches can check whether the level k−1 encoded words of length g(k−1, m) on directions i and j differ.

Proof of Lemma 3.19.
The lemma can be shown by an induction on k. In the base case k =
1, the words are level 0 encoded and have length m. Since the number of context switches is restricted, it is not possible to check the two words for differing bits directly by moving the two directions forward one step at a time. Instead, we use alternation to perform this test. For every number of steps s up to m, we disjunctively move s steps forward in direction i, perform a context switch while memorising the number of steps and the last read symbol, and then move the same number of steps in direction j and compare the symbols. If the symbols differ, the words cannot be the same and we accept, otherwise we reject. If all disjunctive tests reject, there are no differing bits in the words and thus no accepting runs of the AAPA.

In the inductive step, we have k + 1 context switches for some k ≥ 1, and the words are level k encoded and have length g(k, m). Here, we cannot count the index of a symbol in the word in the state space since that would increase the AAPA's size beyond polynomial. We can, however, make use of the fact that the words are encoded such that each symbol in the word is preceded by its index in a level k − 1 encoding. We move through the word in direction i and disjunctively move to the start of some index where we expect the two words to differ. We then perform a context switch and conjunctively test for each position whether (i) the indices differ or (ii) the symbols differ. Test (i) can be done using the induction hypothesis with the k remaining context switches. Test (ii) can be done using a single additional context switch by moving to the symbol, memorising it, then moving to the respective symbol on direction i and comparing the two. □

Proof of Lemma 3.18.
The lemma can be shown by an induction on k. The base case is k =
2. Item (1) can be enforced by checking, for the sequence of words bracketed by [_0 and ]_0 in direction i, that (i) these words have length m, that (ii) the first one is 0 ...
0, that (iii) the last one is 1...1 and that (iv) each of these words is obtained from its predecessor by incrementation; since these words have length m, the latter can be checked by counting steps in the state space. This establishes item (1). For (2), we first need to check that direction j contains level 1 nested index encodings of words of length 2^m. This can be done in the same way as the check in item (1) with the difference that bits belonging to the encoded words themselves have to be skipped. These can, however, easily be found since the indices to be checked are bracketed by [_0 and ]_0. That w_1 and w_l belong to the given regular languages can be checked by their respective language acceptors, where bits not belonging to the words themselves can be skipped. The main difference between the two tests is that l is not determined and thus the point where l is reached has to be guessed nondeterministically.

For the third item of (2), the challenge is that corresponding positions in w_h and w_{h+1} are O(m2^m) steps apart from each other due to the length of the words w_h and thus cannot be matched by counting steps in the state of the AAPA as that would violate its size restriction. However, we can use the two context switches and the fact that we have already checked item (1) to ensure this item. A test for this starts in direction i and conjunctively switches to direction j at the start of each number from item (1). The copy that performs the context switch before number n then has to ensure the correct transduction of the n-th bit from each word w_h to w_{h+1}. For this purpose, it conjunctively moves to the start of each word w_h and performs the transduction of bit n. As the state space is not large enough to store the value n, this is done in the following way: for each position in w_h it is checked whether (i) the correct transduction is being performed or (ii) the index does not match n. The latter is done by using Lemma 3.19 with the remaining context switch. Note that in order to ensure that the former check can be performed, the control location of the transducer has to be tracked.
This can be done by enriching the input word in direction j with states of the transducer in each bit and checking their correct succession during the transduction tests. Since the transducer's control location is available for each position, the copy can apply the transducer's transition function to the current bit and nondeterministically choose one of the possible tuples of new control location and output bit. It then checks for the new control location at the next position in w_h. It also checks for the output bit in w_{h+1} by again disjunctively testing for each position whether the bit is present or the index is different from n.

In the inductive step, we show the claim for k + 1 assuming the induction hypothesis (IH) for k. For the proof of (1) we swap the roles of directions i and j and use the induction hypothesis: as argued before, the first index found being 0 ...
0, the last index being 1...1 and each index being obtained from its predecessor by incrementation are precisely the requirements of claim (2), so claim (1) for k + 1 follows from claim (2) for k. It remains to argue that claim (1) for k + 1 and claim (2) for k, which we have used here, are consistent with each other since they formulate different requirements for the same direction. This is the case since a higher level nested index encoding contains all lower level encodings, in particular the level k − 1 encodings bracketed by [_{k−1} and ]_{k−1}.

For claim (2), we first have to show that direction j contains a sequence of words correctly level k encoded. Like in the base case, this is done in a similar way as claim (1) is enforced, namely by checking the presence of level k − 1 encodings of all indices in the right places. (IH) applies here since we can use the level k − 1 encodings inside the level k encodings that are present in direction i, as we have shown already. The main difference to the argument for claim (1) is that we have to start on direction j instead of i, conjunctively move to the start of each word, and then perform a context switch to be able to use (IH). This results in one additional context switch, which is available since we now have k + 1 of them. The first two items of (2) can easily be achieved by applying the language acceptors to the bits on the highest level of the encodings. The third item of (2) can also be shown as in the base case since after moving from direction i to j, there are k context switches left, which allows us to apply Lemma 3.19. □

Now, we are finally able to show the desired result.
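Before turning to it, the nested index encoding used throughout these lemmas can be made concrete. The following is an illustrative sketch, not from the paper; representing the brackets [_k and ]_k as tagged tuples is an assumption of this illustration, and indices are written least significant bit first, as in the proofs:

```python
def lsb_bin(i, width):
    """Least-significant-bit-first binary encoding of i, padded to `width` bits."""
    return [str((i >> b) & 1) for b in range(width)]

def stock(k, w):
    """Stockmeyer's level-k nested index encoding of the word w."""
    if k == 0:
        return [('[', 0)] + list(w) + [(']', 0)]
    width = max(1, (len(w) - 1).bit_length())  # bits needed for the indices
    out = [('[', k)]
    for i, a in enumerate(w):
        out += stock(k - 1, lsb_bin(i, width))  # index of a, recursively encoded
        out.append(a)
    out.append((']', k))
    return out

print(stock(1, "ab"))
# [('[', 1), ('[', 0), '0', (']', 0), 'a', ('[', 0), '1', (']', 0), 'b', (']', 1)]
```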
Proof of Theorem 3.17.
Inclusion follows from the construction underlying Corollary 3.16. For hardness, we reduce from the acceptance problem for g(k−1, p(n)) space bounded Turing machines using Lemma 3.18 as explained above. □

As mentioned in the introduction, our constructions and completeness proofs can also be applied to alternating asynchronous automata (multitape automata) on finite words: the definitions of k-synchronicity and k-context-boundedness carry over in a direct manner and our algorithms for the emptiness tests can be applied. Furthermore, our hardness proofs can be transferred because our reductions only require reachability of control states and are therefore not dependent on a parity condition. Thus, we believe our results also shed new light on the theory of multitape automata.

4 A μ-CALCULUS FOR HYPERPROPERTIES

We now define our new logic, H_μ. In order to capture asynchronous hyperproperties, we combine ideas from HyperLTL [Finkbeiner et al. 2015], the polyadic μ-calculus [Andersen 1994], and the linear time μ-calculus [Barringer et al. 1986; Vardi 1988] in a novel fashion. From HyperLTL we take the ideas to quantify over path variables and to relativise atomic propositions to path variables. Inspired by the indexed next-operator of the polyadic μ-calculus, we relativise the next-operator to progress only on a single path identified by a path variable. Finally, complex hyperproperties can be specified by fixpoints. In this way, we extend the means provided by the linear time μ-calculus for specifying properties to hyperproperties. Note that none of the logics that inspired the design of H_μ is able to capture asynchronous hyperproperties. We use the following syntax:

Definition 4.1 (Syntax).
Let AP be a set of atomic propositions, N = {π_1, ..., π_n} be a set of path variables and χ = {X_1, ..., X_m} be a set of predicates. We define H_μ formulas over AP, N and χ by the following grammar:

φ := ∃π.φ | ∀π.φ | ψ
ψ := a_π | X | ◯_π ψ | ψ ∨ ψ | ¬ψ | μX.ψ

where a ∈ AP, π ∈ N and X ∈ χ. We call expressions generated by the nonterminal φ quantified formulas and expressions generated by the nonterminal ψ quantifier-free formulas.

In this paper, we will use two representations of an H_μ formula φ. The first and more common one is its syntax tree, which we denote by rep_t(φ). For the second one, we compress the syntax tree into a syntax directed acyclic graph (DAG) where syntactically equivalent subformulas share the same nodes and write rep_d(φ) for this representation. This offers an exponentially more succinct representation for some families of formulas while not increasing the complexity of the algorithms we consider. Then, we use |φ|_t := |rep_t(φ)| and |φ|_d := |rep_d(φ)| as two measures for the size of a formula. Note that since the DAG is obtained from the syntax tree by compressing it, |φ|_d ≤ |φ|_t holds for all formulas φ. Therefore, all complexity upper bounds in which the size of a formula is measured by |·|_d trivially transfer to complexity upper bounds for the other size measure.

We add common connectives as syntactic sugar: true ≡ a_π ∨ ¬a_π, false ≡ ¬true, ψ ∧ ψ′ ≡ ¬(¬ψ ∨ ¬ψ′), ψ → ψ′ ≡ ¬ψ ∨ ψ′, ψ ↔ ψ′ ≡ (ψ → ψ′) ∧ (ψ′ → ψ) and νX.ψ ≡ ¬μX.¬ψ[¬X/X]. Using these additional connectives, we impose some additional constraints on the syntax of H_μ formulas. First, we assume that in a quantified formula φ all predicates are bound by a fixpoint operator. We also assume a strictly guarded form, where predicates X are only allowed in scope of an even number of negations inside μX.φ and are directly preceded by ◯_π for some π.
The latter part of this can indeed be required without loss of generality: if one constructs a formula where there is no progress through ◯_π between μX and X, the fixpoint can equally be eliminated; if there is progress through ◯_π for some π, the ◯_π operator can be moved inwards such that it directly occurs in front of X. Second, a formula is in positive normal form when ¬ only occurs in front of atomic propositions and all bound predicates and path variables are distinct. Finally, we say that a formula is in closed form when all path variables and predicates are bound.

Since our logic extends the linear time μ-calculus with path quantification from HyperLTL, the formula constructs behave in similar ways as they do in those two logics. The main difference to the linear time μ-calculus is that a formula reasons about a set of paths or traces instead of over a single path or trace. Thus, the constructs a_π and ◯_π ψ are indexed to express that a holds on path π or that ψ holds when path π moves one step forward. Indexing the ◯-operator allows us to express asynchronous behaviour. Path quantification ∃π.φ or ∀π.φ allows to require that for one or for all paths π, the set of paths obtained by adding π to the previously considered set fulfills φ. Boolean connectives are interpreted in the standard way. Finally, the constructs X and μX.ψ allow us to formulate iterative properties by least fixpoint constructions.

With this logic, it becomes possible to specify asynchronous hyperproperties, i.e. properties that do not rely on traversing different paths lockstepwise. We now sketch a few scenarios in which this is useful. One potential application of hyperlogics is the analysis of multithreaded software. In this scenario, different path variables used in a formula refer to different threads of the system and both the interaction between threads as well as the specification are captured by the formula.
For example, let π_1 and π_2 refer to executions of two different threads of a system that synchronise via locking. Then the formula μX.(ψ_error ∨ (ψ_move^1 ∧ ◯_{π_1} X) ∨ (ψ_move^2 ∧ ◯_{π_2} X)) expresses that the two threads can reach an error state identified by a formula ψ_error through a lock-sensitive interleaving. In this example, the atomic proposition lock_{π_i} indicates that thread i currently holds the lock and the formula ψ_move^i = ¬lock_{π_{3−i}} ∨ ¬◯_{π_i} lock_{π_i} expresses that thread i can perform a step. Such a property clearly requires asynchronous traversal of the different paths. More complex interaction strategies can be handled by modifying the formulas ψ_move^i.

Asynchronicity is also useful in applications of hyperlogics in security, e.g. when steps observed by the environment do not correspond to the same number of steps in different paths of the model. This occurs, for instance, in models of software systems that reflect internal computations. In such a situation, the formula φ = ∃π.∀π′.νX.μY.((a_π ↔ a_{π′}) ∧ ◯_π ◯_{π′} X) ∨ ◯_{π′} Y expresses that there is a path π such that for all paths π′, π and π′ repeatedly agree on the atomic proposition a. However, contrary to the HyperLTL formula ∃π.∀π′. G(a_π ↔ a_{π′}), a step on π can be matched by an arbitrary number of steps on π′. This illustrates how H_μ allows us to relate a path π describing expected observable behaviour with paths π′ with additional unobservable steps. A similar technique can be used to specify asynchronous variants of classical hyperproperties from the literature like generalised non-interference or observational determinism [Clarkson and Schneider 2010]. The formula φ_obs = ∀π.∀π′.
(ψ_eqL → νX.μY.((ψ_eqL ∧ ◯_π ◯_{π′} X) ∨ (¬obs_π ∧ ◯_π Y) ∨ (¬obs_{π′} ∧ ◯_{π′} Y))) with ψ_eqL = ⋀_{a ∈ L} a_π ↔ a_{π′}, for instance, specifies an asynchronous variant of observational determinism, which intuitively states that a system appears to be deterministic to a low security user who can only see propositions from the set L. More specifically, it states that all pairs of executions which agree on the atomic propositions visible to a low security observer at the beginning of their computation agree on these atomic propositions in all observable situations. Compared to its synchronous counterpart, it skips over unobservable states in both executions.

Later, in Section 6 and Section 7, we adapt the two families of approximate analyses, k-synchronous and k-context-bounded analyses, from AAPA to H_μ. We now illustrate the utility of the resulting analyses. Since any violation of the properties φ and φ_obs from above occurs after a finite number of context switches, a k-context-bounded analysis with sufficiently large k can be used to disprove these properties. Another example is the property that one cannot locally distinguish whether fragments of a computation belong to one trace or another, or more precisely that for every point in π_1 there is a point in π_2 such that the next n steps are indistinguishable. In order to make this property's encoding in H_μ more readable, we introduce some additional syntactic sugar: G_π ψ ≡ νX.(ψ ∧ ◯_π X) expresses that when progressing on π, some property ψ generally holds; F_π ψ ≡ μX.(ψ ∨ ◯_π X) expresses that when progressing on π, some property ψ finally holds. With this syntactic sugar, the described property can be encoded as G_{π_1} F_{π_2} ⋀_{i ≤ n} ⋀_{a ∈ AP} (◯^i_{π_1} a_{π_1} ↔ ◯^i_{π_2} a_{π_2}).
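To see concretely what this last property demands, the following sketch checks a finite-trace approximation of it: every length-n window of labels of the first trace must occur somewhere in the second trace. The function name and the list-of-labels trace format are illustrative assumptions, not from the paper:

```python
def locally_indistinguishable(t1, t2, n):
    """Finite-trace approximation of the property encoded above:
    for every position in t1 there is a position in t2 such that
    the next n labels agree."""
    windows2 = {tuple(t2[j:j + n]) for j in range(len(t2) - n + 1)}
    return all(tuple(t1[j:j + n]) in windows2
               for j in range(len(t1) - n + 1))

# Every two-step window of the first trace appears in the second one:
print(locally_indistinguishable(['a', 'b', 'a'], ['b', 'a', 'b', 'a'], 2))  # True
```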
In this example, the k-context-bounded analyses with large enough k are again precise. For the utility of k-synchronous analyses, let us consider the following example: imagine we expect a property described by a formula ψ on n paths π_1, ..., π_n in a synchronous manner after an initialisation phase that takes a different number of steps on each path. Then, the formula μX.(⋁_{i ≤ n}(Init_{π_i} ∧ ◯_{π_i} X) ∨ (⋀_{i ≤ n}(¬Init_{π_i}) ∧ ψ(π_1, ..., π_n))) describes this expectation in a natural way and can be precisely analysed by the k-synchronous setup if the lengths of the initialisation phases differ by at most k. While two of the above properties can also be expressed using synchronous formulas, this most likely requires exponentially larger and less intuitive formulas.

For the definition of the semantics, we introduce some notation. We call a function Π : N → Paths(K) for a set of path variables N a path assignment and V : χ → (PA → 2^(ℕ^n)) a predicate valuation, where we use PA to denote the set of all path assignments. By Π[π ↦ p] (resp. V[X ↦ M]) we denote a path assignment Π′ (resp. predicate valuation V′) with Π′(π′) = Π(π′) for π′ ≠ π and Π′(π) = p (resp. V′(X′) = V(X′) for X′ ≠ X, V′(X) = M). Also, we extend the notion of word shifts according to vectors from ω-words to path assignments: given a path assignment Π binding path variables π_1, ..., π_n and a vector v = (v_1, ..., v_n) ∈ ℕ^n, we use Π[v] to denote the path assignment where each path π_i is shifted left according to v_i. For the definition of fixpoints, we define an order ⊑ on functions PA → 2^(ℕ^n) such that ξ ⊑ ξ′ iff ξ(Π) ⊆ ξ′(Π) for all Π. In this way, (PA → 2^(ℕ^n), ⊑) forms a complete lattice with ⊥ ≡ λΠ.∅ as its smallest element.

We now define the semantics of H_μ. The semantics is indexed by a k ∈ ℕ ∪ {∞} restricting how far different paths can diverge from each other.
In the semantics indexed by k, we only consider situations where the foremost path is at most k steps ahead of the rearmost path, or more formally, we restrict ourselves to index tuples from the set G^n_k := {(j_1, ..., j_n) ∈ ℕ^n | ∀i, i′. |j_i − j_{i′}| ≤ k}. Note that for k = ∞, G^n_k contains all index tuples from ℕ^n. We distinguish between a quantifier semantics and a path semantics for quantified and quantifier-free formulas, respectively. We write Π ⊨^K_k φ to denote that a path assignment Π in the context of a Kripke Structure K satisfies a quantified formula φ in the k-semantics. This is extended to Kripke Structures: we write K ⊨_k φ iff {} ⊨^K_k φ for the empty path assignment {}. Path semantics on the other hand applies to quantifier-free formulas ψ with possibly free predicates. It is defined in the context of predicate valuations and captures which index combinations from G^n_k fulfill the formula for the given path assignment. We write (j_1, ..., j_n) ∈ ⟦φ⟧^V_k(Π) for (j_1, ..., j_n) ∈ G^n_k to denote that, in the context of a predicate valuation V, when we consider a path assignment Π mapping the variables π_1, ..., π_n to paths p_1, ..., p_n, the combination of suffixes p_1[j_1], ..., p_n[j_n] satisfies the formula φ. Since no restrictions are imposed on index tuples for k = ∞, we omit the subscript in this situation and write K ⊨_∞ φ as K ⊨ φ, Π ⊨^K_∞ φ as Π ⊨^K φ and ⟦ψ⟧^V_∞ as ⟦ψ⟧^V, respectively.

Definition 4.2 (Quantifier Semantics).

Π ⊨^K_k ∃π.φ iff Π[π ↦ p] ⊨^K_k φ for some p ∈ Paths(K)
Π ⊨^K_k ∀π.φ iff Π[π ↦ p] ⊨^K_k φ for all p ∈ Paths(K)
Π ⊨^K_k ψ iff (0, ..., 0) ∈ ⟦ψ⟧^V_k(Π) for some V

for a quantified formula φ and a quantifier-free formula ψ.

Definition 4.3 (Path Semantics).

⟦a_{π_i}⟧^V_k := λΠ.{(j_1, ..., j_n) ∈ G^n_k | a ∈ L(Π(π_i)(j_i))}
⟦X⟧^V_k := V(X)
⟦◯_{π_i} ψ⟧^V_k := λΠ.
{( 𝑗 , ..., 𝑗 𝑛 ) ∈ 𝐺 𝑛𝑘 | ( 𝑗 , ..., 𝑗 𝑖 + , ..., 𝑗 𝑛 ) ∈ È 𝜓 É V 𝑘 ( Π )}È 𝜓 ∨ 𝜓 ′ É V 𝑘 : = 𝜆 Π . È 𝜓 É V 𝑘 ( Π ) ∪ È 𝜓 ′ É V 𝑘 ( Π )Ȭ 𝜓 É V 𝑘 : = 𝜆 Π .𝐺 𝑛𝑘 \ È 𝜓 É V 𝑘 ( Π )È 𝜇𝑋 .𝜓 É V 𝑘 : = / { 𝜉 : 𝑃𝐴 → 𝐺 𝑛𝑘 | 𝜉 ⊒ È 𝜓 É V [ 𝑋 ↦→ 𝜉 ] 𝑘 } :17 We now establish some properties of 𝐻 𝜇 ’s semantics. The first one is that the semantics of 𝜇𝑋 .𝜓 indeed characterises a fixpoint. Theorem 4.4. 𝛼 ( 𝜉 ) : = È 𝜓 É V [ 𝑋 ↦→ 𝜉 ] 𝑘 is monotone for all 𝑘 , V , 𝑋 and 𝜓 in positive normal form. The Knaster-Tarski fixpoint theorem [Cousot and Cousot 1979; Tarski 1955] then gives a con-structive characterisation of the semantics of fixpoint formulas via ordinal numbers:
Corollary 4.5. ⟦μX.ψ⟧^V_k is the least fixpoint of α. It can be characterised by its approximants ⋃_{κ ≥ 0} α^κ(⊥) with α^0(ξ) = ξ, α^{κ+1}(ξ) = α(α^κ(ξ)) and α^λ(ξ) = λΠ.⋃_{κ < λ} α^κ(ξ)(Π), where κ ranges over ordinals and λ over limit ordinals.

The second property is that the k-semantics properly approximates the full semantics.

Theorem 4.6. β(k) := ⟦ψ⟧^V_k is monotone for all ψ in positive normal form and V.

Corollary 4.7.
For all Kripke Structures K, formulas φ in positive normal form and k, k′ ∈ ℕ ∪ {∞} with k ≤ k′, we have: K ⊨_k φ implies K ⊨_{k′} φ.

For the sake of reasoning about satisfiability, we define a variant of the semantics for H_μ formulas on traces in the straightforward way. Instead of considering path assignments Π : N → Paths(K), we consider trace assignments Π : N → T for a set of traces T. Existential and universal quantifiers then quantify over traces t ∈ T instead of paths p ∈ Paths(K). Also, atomic propositions require the proposition to be included in the respective set of atomic propositions directly instead of in the set obtained by applying the labelling function to a state. We write
T ⊨ φ to denote that a set of traces T fulfills a formula. By choosing T = Traces(K), the two semantics coincide. For a more formal specification of this semantics variation, we refer the reader to the appendix. Given these two variants of H_μ's semantics, we consider two decision problems:

• Model Checking: given a closed H_μ formula φ and a Kripke Structure K, does K ⊨ φ hold?
• Satisfiability: given a closed H_μ formula φ, is there a non-empty set of traces T such that T ⊨ φ holds?

For this purpose, we first establish a connection between H_μ and AAPA. This allows us to apply the results on AAPA that we have established already. In order to transfer results for restricted classes of AAPA, we define corresponding fragments of the logic H_μ next.

Definition 4.8.
We call an H_μ formula k-synchronous for a Kripke Structure K if the following condition holds: K ⊨ φ implies K ⊨_k φ. A 0-synchronous formula is called synchronous. As a small technical detail, ⟦◯_π ψ⟧^V_0, i.e. the 0-semantics of ◯_π ψ, is always empty. Strictly speaking, this makes the above definition useless for the case k =
0. In order to cure this defect, we allow a synchronous next operator ◯ψ that advances all directions simultaneously in synchronous formulas.

Definition 4.9 (Synchronous syntactic fragment). An H_μ formula belongs to the synchronous fragment if it uses the synchronous next operator ◯ instead of the indexed next operators ◯_π.

For the next two fragments, we need a notion of the extended syntax tree of a formula φ. Thereby, we mean the (infinite) tree obtained by repeatedly substituting fixpoint variables by their respective formula in the syntax tree, i.e. for every fixpoint expression μX.ψ (or νX.ψ), replacing every occurrence of X in ψ with ψ. It is easy to see that membership in the two syntactic fragments defined next is decidable in polynomial time.

Definition 4.10 (k-synchronous syntactic fragment). An H_μ formula belongs to the k-synchronous syntactic fragment if the difference between the number of occurrences of ◯_π and ◯_{π′} for π ≠ π′ on any path starting in the root of the formula's extended syntax tree is at most k.

Definition 4.11 (k-context-bounded syntactic fragment). An H_μ formula is from the k-context-bounded syntactic fragment if on every path starting in the root of the formula's extended syntax tree, the number of switches between directions π for ◯_π constructs is at most k − 1.

In this section we establish the correspondence between AAPA and H_μ, a relation that is essential for transferring the results from Section 3 to H_μ. Crucial for this correspondence is the fact that a path assignment Π for paths π_1, ..., π_n over the set of states S can be encoded into a word over the alphabet S^n. Then, free path variables in a formula correspond to components of the input alphabet and free path predicates correspond to holes in the automaton, where a fitting semantics can be plugged in once the predicate is bound.
Because of this correspondence, we use the same name for a predicate and its corresponding hole in an automaton. Thus, given a path assignment Π with Π(π_i) = s_i^0 s_i^1 ..., we define its translation into a word over the alphabet S = S^n as w_Π = (s_1^0, ..., s_n^0)(s_1^1, ..., s_n^1) ... ∈ S^ω. While viewing path assignments as such words, we handle predicate valuations V by languages L(V(X_j)(Π)) = {w_Π[v] ∈ S^ω | v ∈ V(X_j)(Π)}. For these languages to be well-defined, we need to restrict ourselves to well-formed valuations with the property that for all vectors v, v′ ∈ N^n, path assignments Π, Π′ and predicates X, we have that Π[v] = Π′[v′] implies v ∈ V(X)(Π) iff v′ ∈ V(X)(Π′). However, we can show via induction that only valuations with this property occur during fixpoint iterations of H_μ. We introduce the notion of K-equivalence:

Definition 5.1 (K-equivalence). Given a Kripke structure K = (S, s_0, δ, L), an H_μ formula ψ over {π_1, ..., π_n} with free predicates X_1, ..., X_m and an alternating (asynchronous) parity automaton A with holes X_1, ..., X_m over the alphabet S^n, we call A K-equivalent to ψ iff the following condition holds: for all path assignments Π, well-formed predicate valuations V and vectors v ∈ N^n, we have v ∈ ⟦ψ⟧_V(Π) iff w_Π[v] ∈ L(A[X_1 : L(V(X_1)(Π)), ..., X_m : L(V(X_m)(Π))]).

The definition of K-equivalence is straightforwardly extended to quantified formulas φ: A is called K-equivalent to φ iff for all Π the statements (i) Π ⊨_K φ and (ii) w_Π ∈ L(A) are equivalent. This notion allows us to formulate the following theorem:

Theorem 5.2.
Let K be a Kripke structure.
(1) For every quantifier-free H_μ formula ψ in positive normal form, there is an AAPA A_ψ of linear size in |ψ|_d (and hence also in |ψ|_t) such that A_ψ is K-equivalent to ψ.
(2) For every AAPA A over the alphabet Σ, there is a quantifier-free H_μ formula ψ_A with |ψ_A|_d linear and |ψ_A|_t exponential in |A| such that A is K-equivalent to ψ_A.

We dedicate the following two subsections to the constructions underlying the proof of this theorem. A detailed correctness proof for these constructions can be found in the appendix.

H_μ to AAPA: Construction for Theorem 5.2, Part 1

Intuitively, the AAPA A_ψ has a node for each node in ψ's syntax DAG. For most constructs, the node can straightforwardly check the semantics of the formula, either directly or by transitioning to nodes for subformulas in a suitable manner. The most interesting cases are those for predicates and fixpoint expressions. As in the definition of K-equivalence, free predicates correspond to holes in the automaton and are thus translated to such. In the same definition, bound predicates do not occur as holes in the automaton; indeed, the construction for fixpoints fills the corresponding holes. This is done by a backwards edge to the start of the automaton. Taking this backwards edge corresponds to one unfolding of a fixpoint iteration. Depending on whether we have a least or greatest fixpoint, the iteration can be performed a finite or infinite number of times. This is captured by assigning the predicate state a priority that is lower than any other in the automaton and that is odd for least and even for greatest fixpoints.

We inductively construct the AAPA A_ψ = (Q, ρ_0, ρ, Ω) for a formula ψ. In each step, we assume that automata for all subformulas of ψ are already constructed.
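The flavour of such an inductive step can be sketched in code. The following toy model is our own notation and not the paper's: an automaton is a tuple of states, an initial transition condition, a transition map and a priority map, with transition conditions kept opaque. We show only the merge used for disjunctions, mirroring the case ψ = ψ_1 ∨ ψ_2 described below.

```python
def disjunction(a1, a2):
    """Case psi = psi1 v psi2: disjoint union of the two partial automata,
    with the initial transition conditions combined disjunctively."""
    q1, r01, rho1, om1 = a1
    q2, r02, rho2, om2 = a2
    assert not (set(q1) & set(q2)), "state sets are assumed disjoint"
    return (
        list(q1) + list(q2),   # Q = Q1 u Q2
        ("or", r01, r02),      # rho0 = rho0,1 v rho0,2 (kept symbolic)
        {**rho1, **rho2},      # rho induced componentwise
        {**om1, **om2},        # priorities inherited
    )
```

A design note: because transition conditions stay symbolic, the same skeleton would accommodate the other boolean cases by changing only the combinator in the initial condition.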
The construction is linear in |ψ|_d since in each step at most a constant number of states is added to the already existing partial automata, and automata for syntactically (and thus semantically) equivalent subformulas can be shared. We now describe each case of the inductive construction. Recall that we assume that states true and false are present in every automaton. We therefore do not mention them explicitly.

Case ψ = a_{π_i}: For atomic propositions, we let the automaton enter an accepting self-loop if the proposition is fulfilled at the current index and a rejecting self-loop otherwise. We set Q = {a_{π_i}} and ρ_0 = a_{π_i}. Furthermore, we let ρ(a_{π_i}, s, d) = true if a ∈ L(s) and d = i, and let ρ(a_{π_i}, s, d) = false otherwise. For the priority assignment, we set Ω(a_{π_i}) = 1. The case ψ = ¬a_{π_i} is analogous.

Case ψ = X: A predicate is transformed into a hole in the automaton: we set Q = {X}, ρ_0 = X, ρ(X, s, d) = ⊥ and Ω(X) = ⊥.

Case ψ = ψ_1 ∨ ψ_2: From the induction hypothesis, we have automata A_{ψ_i} = (Q_i, ρ_{0,i}, ρ_i, Ω_i) for the formulas ψ_i. We set Q = Q_1 ∪ Q_2 and ρ_0 = ρ_{0,1} ∨ ρ_{0,2}. The transition function is induced by the transition functions of the automata A_{ψ_i}, i.e. ρ(q, s, d) = ρ_i(q, s, d) for q ∈ Q_i. For the priorities, we pick Ω(q) = Ω_i(q) for q ∈ Q_i. This automaton accepts iff one of the automata A_{ψ_i} accepts.

Case ψ = ◯_i ψ_1: From the induction hypothesis, we obtain A_{ψ_1} = (Q_1, ρ_{0,1}, ρ_1, Ω_1) for ψ_1. We then set Q = {ψ} ∪ Q_1 and ρ_0 = ψ. Furthermore, we set ρ(ψ, s, d) = ρ_{0,1} for d = i and ρ(ψ, s, d) = false otherwise. For the states q ∈ Q_1 and priorities, we choose ρ(q, s, d) = ρ_1(q, s, d) and Ω(q) = Ω_1(q) for q ∈ Q_1. The choice of priority for state ψ does not matter, since every run visiting this node will either visit it only a finite number of times or visit states with lower or equal priority infinitely often. This is ensured by the construction for fixpoint formulas.

Case ψ = μX.ψ_1 or ψ = νX.ψ_1: By the induction hypothesis, we have an automaton A_{ψ_1} = (Q_1, ρ_{0,1}, ρ_1, Ω_1) for ψ_1. We can assume w.l.o.g. that there is only one hole for the path predicate X, since all holes have the same behaviour. Let p := min{Ω_1(q_i) | q_i ∈ Q_1}. Let p_even = p and p_odd = p − 1 if p is even; otherwise, let p_even = p − 1 and p_odd = p. Intuitively, p_even (resp. p_odd) is an even (resp. odd) lower bound on the priorities occurring in A_{ψ_1}. For the case where the priority used in the automaton below is negative, all priorities in the automaton are shifted by a multiple of 2 such that the lowest priority is 0 or 1. We set Q = Q_1, ρ_0 = X, ρ(X, s, d) = ρ_1(ρ_{0,1}, s, d) and ρ(q, s, d) = ρ_1(q, s, d) for q ∈ Q_1 \ {X}. For the priorities, we choose Ω(q) = Ω_1(q) for q ∈ Q_1 \ {X}. The state X is assigned the priority Ω(X) = p_odd if ψ is a μ formula and Ω(X) = p_even otherwise. In this definition, we use ρ_1(ρ_{0,1}, s, d) to denote a variant of ρ_{0,1} in which every occurrence of a state q is substituted by ρ_1(q, s, d). The automaton evaluates the subformula ψ_1, but switches to the start in case a predicate X is encountered. The choice of priority for X reflects that an infinite unfolding of a μ-formula should lead to a rejecting run, while an infinite unfolding of a ν-formula should lead to an accepting run.

AAPA to H_μ: Construction for Theorem 5.2, Part 2

Given an AAPA A = (Q, ρ_0, ρ, Ω), we construct formulas ψ^h_i by induction on i. Our construction is inspired by a construction from [Bozzelli 2007] in the context of a fixpoint logic for visibly pushdown languages. We fix an ordering q_1, ..., q_n on the states of Q such that Ω(q_i) ≥ Ω(q_j) for i < j for non-hole states and holes are the last m states of the ordering. Intuitively, the formula ψ^h_0 describes the local behaviour of a state q_h, and the formula ψ^h_i expresses the existence of an accepting run of A starting in q_h where only states with a priority higher than Ω(q_i) are visited infinitely often. For each state q_i, we introduce a predicate X_i which is bound in the i-th step of the inductive construction. In the construction, i ranges from 0 to n − m and h ranges from 1 to n. Therefore, when the construction is finished, only the hole states of A remain as unbound predicates in the formula and we can choose ρ_0[q_1/ψ^1_{n−m}] ... [q_n/ψ^n_{n−m}] as the desired formula.

Construction of ψ^h_0: for holes, that is for h > n − m, the formula ψ^h_0 is given as the predicate X_h. For non-holes, that is for h ≤ n − m, we first construct ρ̂(q_h, σ, d) from ρ(q_h, σ, d) by substituting every occurrence of a state q_l with X_l.
ψ^h_0 is then given as ⋁_{σ ∈ Σ} ⋁_{d ∈ M} (σ_{π_d} ∧ ◯_d ρ̂(q_h, σ, d)) to describe that some σ is currently being read in direction d and that in the next step we are in some combination of states satisfying ρ(q_h, σ, d) after moving on in direction d.

Construction of ψ^h_i for i > 0: we assume that ψ^h_{i−1} for all h is already constructed. As a first step, we construct the formula ψ^i_i by binding the predicate X_i. We differentiate two cases based on the priority of state q_i. If Ω(q_i) is odd, then ψ^i_i is given as μX_i.ψ^i_{i−1}. In the other case, where Ω(q_i) is even, we construct ψ^i_i as νX_i.ψ^i_{i−1}. Then we construct ψ^h_i for all h ≠ i by substituting ψ^i_i for every occurrence of the predicate X_i in the previous formula, that is: ψ^h_i = ψ^h_{i−1}[X_i/ψ^i_i].

Size of the construction:
In the base case of the construction, a number of syntax DAG nodes is created that is linear in the size of ρ. In the inductive step, only a single node is added, namely the node for the fixpoint in ψ^i_i. The other formulas in this step, namely ψ^h_i for h ≠ i, can be obtained from ψ^h_{i−1} by redirecting edges in the syntax DAG. Since only a single node is added in each step and there is a step for each node of the automaton, |Q| nodes are added in total. Finally, when combining the formulas, at most |ρ_0| nodes are added. Thus, we have |ψ_A|_d = O(|ρ_0| + |Q| + |ρ|) = O(|A|).

For the second measure of size, |ψ_A|_t, the formula must be represented as a syntax tree. Here, the substitution in the construction of ψ^h_i for h ≠ i cannot be performed without adding nodes to the syntax tree since the nodes representing ψ^i_i must be duplicated. This results in a linear increase in each of the steps, which leads to an overall exponential size increase in the worst case.

Since the emptiness problem for AAPA is Σ¹₁-hard and we can effectively build an H_μ formula for every AAPA and check it against a structure that generates Σ^ω, we obtain:

Theorem 5.3.
Model checking H_μ against a Kripke model is Σ¹₁-hard.

Likewise, the emptiness problem of AAPA can be reduced to the satisfiability problem for H_μ:

Theorem 5.4.
The satisfiability problem for H_μ is Σ¹₁-hard.

fragment             | complexity in |φ|_d                          | complexity in |K|
full                 | UNDECIDABLE                                  |
synchronous          | SPACE(g(d, |φ|_d))-complete                  | NSPACE(g(d−1, |K|))-complete
k-synchronous        | SPACE(g(d+1, |φ|_d))-complete                | NSPACE(g(d−1, |K|))-complete
k-context-bounded    | in SPACE(g(d+k−1, |φ|_d)),                   | in NSPACE(g(d−1, |K|))
                     | SPACE(g(d̃+k−1, |φ|_d))-hard, d̃ = min(d, 1)  |

Table 1. Complexity results for model checking.
The Σ¹₁-hardness of model checking and satisfiability implies that these problems can neither be algorithmically solved nor can a complete approximation procedure (e.g. via bounded model checking) be developed for the full logic. However, the concepts of k-synchronicity and k-context-boundedness have already proved to lead to decidability for AAPA. We therefore consider similarly restricted analyses for H_μ and explain how they relate to the matching analyses of AAPA.

For model checking H_μ against Kripke structures, two main problems have to be solved: the quantifier-free part of the input formula has to be suitably represented, and the representation has to take the quantifiers into account. As representation, we use the AAPA derived from the construction of Subsection 5.1. The main idea for handling quantifiers is to take a nondeterministic automaton representing the inner formula and to form a product with the Kripke structure in order to check for the existence of appropriate paths in the structure. Additionally, the automata need to be complemented in the case of universal quantifiers. Both k-synchronous and k-context-bounded AAPA are suitable for this purpose as we can dealternate them to NPA. In this way, we lift our approximate analyses from automata to formulas. Moreover, we even obtain precise results on several fragments of the logic. This stems from the following facts:

Theorem 6.1.
• The automaton A_ψ for a synchronous H_μ formula ψ can be transformed into an APA of asymptotically the same size.
• The automaton A_ψ for a k-synchronous H_μ formula ψ is a k-synchronous AAPA.
• The automaton A_ψ for a k-context-bounded H_μ formula ψ is a k-context-bounded AAPA.

Let us first consider the k-synchronous analysis. It can be applied to formulas by using the following fact:

Theorem 6.2.
For a quantifier-free formula ψ, a path assignment Π and a well-formed predicate valuation V, the automaton A_ψ from Theorem 5.2 has a k-synchronous accepting run over w_Π[v] with holes filled according to V iff v ∈ ⟦ψ⟧^k_V(Π).

This generalises the definition of K-equivalence. The proof relies on the observation that each node in A_ψ's runs corresponds to a subformula of ψ and that for each such subformula, the offset counters that appear in an accepting run correspond to the tuples in the semantics of ψ. Thus, the restriction induced by the k-semantics and k-synchronous runs deliver the same tuples.

Theorem 6.2 implies that we can use the k-synchronous analysis underlying Theorem 3.12 on the automaton A_ψ for a formula ψ to determine that formula's k-semantics. Since the approximations given by the k-semantics improve with increasing k for quantifier-free (Theorem 4.6) as well as quantified (Corollary 4.7) formulas, this procedure yields increasingly precise analyses of arbitrary H_μ formulas. Indeed, any approximate analysis parameterised in some k on AAPA that supplies us, for increasing k, with an increasing number of offsets v from which there is an accepting run for a fixed w_Π can be used to approximate H_μ as well. This is due to the fact that such an analysis directly corresponds to a monotone semantics approximator as in Theorem 4.6 and is thus applicable in a proof like the one from Corollary 4.7.

[Footnote on the construction from [Bozzelli 2007]: more precisely, the construction is found in the appendix of an extended version of that paper that was provided to us by the author. While the paper states that the construction can be performed in linear time and thus with at most linear size increase, it is not discernible how the construction can be performed without an at worst exponential blowup when using the syntax tree of a formula as the basis for measuring its size.]
In particular, this applies to the k-context-bounded analysis underlying Theorem 3.17, using the above argument about the correspondence between offset counters and tuples in the semantics.

Given a k-synchronous or k-context-bounded AAPA for a quantified formula, the quantifiers can be handled in the same way as for HyperLTL in [Finkbeiner et al. 2015].

Construction for existential quantifiers: we perform the construction for a formula ∃π_{n+1}.φ and a structure K = (S, s_0, δ, L). Our construction simulates one input component of A_φ in the state space of A_{∃π_{n+1}.φ}. In order to allow us to do so, we assume A_φ to be given as an NPA. For the fragments we consider, a translation to NPA is possible, as seen in Section 3. Therefore, A_φ has the form (Q_1, q_{0,1}, ρ_1, Ω_1) and input alphabet S^{n+1}. We construct the NPA A_{∃π_{n+1}.φ} = (Q, q_0, ρ, Ω) with input alphabet S^n, states Q = Q_1 × S, initial state q_0 = (q_{0,1}, s_0), transition function ρ((q, s), s⃗) = {(q′, s′) ∈ Q_1 × S | q′ ∈ ρ_1(q, (s⃗, s)), s′ ∈ δ(s)}, where (s⃗, s) denotes the letter s⃗ extended by s in the last component, and priorities Ω((q, s)) = Ω_1(q).

As mentioned, the last component of S is now simulated in the last component of the state space of A_{∃π_{n+1}.φ}. Simultaneously, this component makes sure that transitions are taken according to the transition function δ of K. Choosing (q_{0,1}, s_0) as the starting state ensures that the path which is simulated in this way starts in the initial state s_0 of K.

Construction for universal quantifiers: in order to handle universal quantifiers, we complement the automaton A_φ by swapping ∧ and ∨ in transitions and increasing all priorities by one. We then construct the automaton for ∃π_{n+1}.¬φ and complement it again by the same procedure. Note that when several of these constructions are combined, double negations can be cancelled out to avoid unnecessary complementation constructions.
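The product construction for the existential quantifier can be sketched as follows. This is a simplified toy rendering with dict-based automata; all names are our own, and the parity acceptance condition is carried along but not evaluated.

```python
def exists_product(npa, kripke):
    """Simulate the last input component of an NPA over S^(n+1) inside
    the state space, yielding an NPA over S^n paired with the Kripke
    structure's transition relation."""
    states1, q01, rho1, omega1 = npa       # rho1: (q, letter) -> set of states
    S, s0, delta = kripke                  # delta: s -> set of successor states
    states = [(q, s) for q in states1 for s in S]
    q0 = (q01, s0)                         # simulated path starts in s0

    def rho(state, letter):                # letter ranges over S^n
        q, s = state
        # extend the letter by the simulated component, step q on it,
        # and advance the simulated state along delta
        return {(q2, s2) for q2 in rho1[(q, letter + (s,))] for s2 in delta[s]}

    omega = {(q, s): omega1[q] for (q, s) in states}
    return states, q0, rho, omega
```

Used on a one-state NPA with n = 0 (empty outer letters), the product walks the Kripke structure in the state component exactly as described above.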
In order to analyse the size of the resulting automata, we need a notion of alternation depth, which is defined as the number of switches between ∃ and ∀ quantifiers in the quantifier prefix of the formula. Since our quantifier construction involves transforming the automaton into an NPA, we formulate our evaluation of A_φ's size for the resulting NPA:

Lemma 6.3.
The NPA A_φ for a closed formula φ with alternation depth d has size:
• O(g(d, |K| · g(1, |φ|_d))) in a synchronous analysis,
• O(g(d, |K| · g(2, |φ|_d))) in a k-synchronous analysis, and
• O(g(d, |K| · g(k, |φ|_d))) in a k-context-bounded analysis.

For the decision problems corresponding to our analyses, we obtain the following complexity results. The upper bounds also apply for the respective analyses.
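Throughout these bounds, g(k, n) can be read as a tower of exponentials. Assuming the convention g(0, n) = n and g(k+1, n) = 2^{g(k, n)} (our reading of the notation fixed in earlier sections, chosen so that SPACE(g(1, n)) is EXPSPACE), a direct rendering is:

```python
def g(k, n):
    """Tower of k exponentials above n: g(0, n) = n, g(k+1, n) = 2**g(k, n).
    This convention is an assumption about the paper's notation."""
    return n if k == 0 else 2 ** g(k - 1, n)
```

Under this reading, one exponential is gained per quantifier alternation, matching the pattern of the completeness results below.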
Theorem 6.4.
Model checking a closed synchronous H_μ formula φ with alternation depth d against a Kripke structure is complete for SPACE(g(d, |φ|_d)), SPACE(g(d, |φ|_t)) and NSPACE(g(d−1, |K|)). For the case of d = 0, we use the definition from [Finkbeiner et al. 2015], where NSPACE(g(−1, n)) was defined as NLOGSPACE. For d > 0, we can use Savitch's Theorem to see that the problems are actually complete for the deterministic space classes.

Proof.
The upper bounds follow from Lemma 6.3 and the
NLOGSPACE complexity of emptiness tests on NPA. For hardness, we can reduce from HyperLTL model checking [Rabe 2016] to obtain the desired results. □
Theorem 6.5.
Model checking a closed k-synchronous H_μ formula φ with alternation depth d against a Kripke structure is complete for SPACE(g(d+1, |φ|_d)) and NSPACE(g(d−1, |K|)).
The upper bounds follow from Lemma 6.3 and the
NLOGSPACE complexity of emptiness tests on NPA.

For hardness, we reduce from the acceptance problem for g(k, p(n)) space bounded deterministic Turing machines. The reduction is based on Sistla's classical yardstick construction for the satisfiability problem of Quantified Propositional Temporal Logic (QPTL) [Sistla 1983] and the adaptation of that reduction to HyperLTL [Rabe 2016]. More concretely, in the reduction for HyperLTL, a formula φ_{k,m}(P_x, P_y) is constructed such that P_x and P_y are true at exactly one point and the points at which both propositions are fulfilled are exactly N_{k,m} steps away, where N_{k,m} ≥ g(k, m). While in that reduction an alternation free formula is constructed for a polynomial N_{k,m}, we can instead construct a quantifier free formula for an exponential N_{k,m} by building a 2-synchronous AAPA as in the proof of Theorem 3.12 that can check for two propositions whether they are exponentially many indices away from each other. For this AAPA, we can obtain a 2-synchronous H_μ formula in polynomial time. We can then inductively construct the formulas as in the proof for HyperLTL and translate them to H_μ. Note that since HyperLTL progresses on different paths synchronously, the inductive construction can easily be adapted to preserve 2-synchronicity of the formula. Accordingly, we need one quantifier alternation less in our H_μ formula than in the corresponding HyperLTL or QPTL formulas to build a yardstick of length N_{k,m} and obtain the desired lower bound.

The hardness claim for fixed formulas can be obtained by a direct reduction from HyperLTL model checking [Rabe 2016]. □

By the space hierarchy theorem, the completeness for
SPACE(g(d+1, |φ|_d)) implies that when measuring the size of formulas φ by |·|_t, at least space O(g(d, |φ|_t)) is needed. Note that this does not imply hardness for the corresponding class since the implication is based on an exponential time reduction, but hardness requires polynomial time reductions. A similar reasoning applies to later theorems in which completeness results based on |·|_d are given.

Theorem 6.6.
Model checking a closed k-context-bounded H_μ formula φ with alternation depth d against a Kripke structure is in SPACE(g(d+k−1, |φ|_d)) and NSPACE(g(d−1, |K|)) and is hard for SPACE(g(d̃+k−1, |φ|_d)) where d̃ = min(d, 1).
The upper bounds follow from Lemma 6.3 and the
NLOGSPACE complexity of emptiness tests on NPA. For hardness, we show the claim for d = 0 and d ≥ 1 separately. For d = 0, we reduce from the emptiness problem for k-context-bounded AAPA. More specifically, an AAPA A is non-empty iff K ⊨ ∃π_1 ... ∃π_n.ψ_A for the structure K whose traces correspond to arbitrary words over A's input alphabet.

In the second case, where d ≥ 1, we reduce from the acceptance problem for g(k−1, p(n)) space bounded deterministic Turing machines. Similar to the proof of Theorem 3.17, we show a more general result about regular transductions on level k−1 encodings: for an input word of length n, we construct a Kripke structure K and a formula φ such that K ⊨ φ iff there is a sequence of level k−1 encoded words w_1, w_2, ..., w_l such that
(a) w_1 is accepted by the first regular language acceptor,
(b) w_l is accepted by the second regular language acceptor, and
(c) for all i < l, w_{i+1} is obtained from w_i by applying the regular transducer T = (Q, q_0, γ).

As the first step, we construct the Kripke structure K. Its set of atomic propositions is given as AP = {0, 1} ∪ {[_i, ]_i | i ≤ k} ∪ Q, where {0, 1} is the alphabet for the word itself, {[_i, ]_i | i ≤ k} is used for the Stockmeyer encodings and Q is the set of states of the regular transducer. For this set of atomic propositions, we choose K as the structure that produces arbitrary traces over AP.
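Condition (c) applies the regular transducer T = (Q, q_0, γ) letter by letter. As a hedged illustration (our own toy encoding, with γ given as a dictionary from (state, input symbol) to a set of (next state, output symbol) pairs, and words as strings), one transduction step over a whole word can be sketched as:

```python
def transduce(gamma, q0, word):
    """All output words producible by one pass of the transducer over word."""
    runs = {("", q0)}  # pairs of (output so far, current state)
    for sym in word:
        runs = {(out + o, q2) for out, q in runs for q2, o in gamma[(q, sym)]}
    return {out for out, _ in runs}
```

For a deterministic transducer (singleton sets in γ), the result is a single output word, matching the deterministic setting of the reduction.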
Finally, we use ◯^word_π ψ to encode modalities G^word_π ψ and F^word_π ψ which express that ψ holds for each or for one encoded word on π, respectively.

To increase readability of the formula even further, we also introduce some auxiliary formulas:
• ψ^{k−1}_stock(π, π′) is obtained from Lemma 3.18 and expresses that the next word of length g(k−1, p(n)) on π represents a level k−1 encoding (using π′ and k−1 contexts)
• ψ^{k−1}_same(π, π′) is obtained from Lemma 3.19 and expresses that the first level k−1 encoded words on π and π′ match each other (with k−1 contexts, ending in a π context)
• ψ_start(π) and ψ_end(π) state that the first encoded word on π is accepted by the first and second regular language acceptor, respectively
• ψ_enc(π) checks that π respects a specific encoding. This means:
– On every index, exactly one atomic proposition from {0, 1} ∪ {[_i, ]_i | i ≤ k} is true
– On every index of an encoded word, exactly one atomic proposition from Q is true
– The first Q-proposition in every encoded word is q_0

Now we are able to construct the formula φ:

∃π_seq. ∃π_help. ∀π_index. ψ^{k−1}_stock(π_index, π_help) → (
(1)  ψ_enc(π_seq) ∧ G^word_{π_seq} ψ^{k−1}_stock(π_seq, π_help) ∧
(2)  ψ_start(π_seq) ∧ F^word_{π_seq} ψ_end(π_seq) ∧
(3)  G_{π_seq} ( ψ^{k−1}_same(π_seq, π_index) → ◯^symbol_{π_seq} ⋁_{i ∈ {0,1}, q ∈ Q} ( i_{π_seq} ∧ q_{π_seq} ∧ ⋁_{(q′,i′) ∈ γ(q,i)} ( ◯^symbol_{π_seq} q′_{π_seq} ∧
(4)  ◯^word_{π_seq} ( ¬ψ^{k−1}_same(π_seq, π_index) U_{π_seq} ( ψ^{k−1}_same(π_seq, π_index) ∧ ◯^symbol_{π_seq} i′_{π_seq} ) ) ) ) ) )

Intuitively, this formula quantifying the three paths π_seq, π_help and π_index can be understood in the following way: on π_seq, we check for the existence of the sequence of Stockmeyer encoded words. We use π_help as a path yielding indices of lower level encodings to be used for checking the presence of Stockmeyer encodings as in Lemma 3.18, or more specifically the translation of the AAPA referred to in that lemma into H_μ.
The path π_index is universally quantified and is used to obtain all Stockmeyer indices for a level k encoding. The premise ψ^{k−1}_stock(π_index, π_help) ensures that it is indeed a level k−1 encoding, i.e. that the encoded word over {0, 1} represents an index for a level k encoding. In (1), we ensure that π_seq has the required encoding. Part (2) of the formula expresses the two conditions (a) and (b) that the sequence of words must fulfill. Finally, (3) and (4) express condition (c) of the sequence, i.e. that each word is obtained from the previous one by applying the regular transducer. This is done by checking the transductions for each index of the words separately (which is achieved by the universal quantification over π_index): whenever we find the index we are currently checking, we determine the current symbol of the word and the state the transducer is in. We choose one of T's possible transitions for the current symbol and state to determine its output and next state. The next state is then expected at the next symbol of the current encoded word. The output, on the other hand, is expected the next time this index occurs in the sequence. □

fragment                               | complexity
full                                   | UNDECIDABLE
alternation free synchronous           | PSPACE-complete
alternation free k-synchronous         | EXPSPACE-complete
alternation free k-context-bounded     | (k−1)EXPSPACE-complete
∃*∀* synchronous                       | EXPSPACE-complete
∃*∀* k-synchronous                     | EXPSPACE-complete
∃*∀* k-context-bounded                 | (k−1)EXPSPACE-complete

Table 2. Complexity results for satisfiability when representing a formula with rep_d(·).

A few remarks are in order about these last results.

First, since the formula constructed for the reduction for Theorem 6.6 works in a similar way as the automaton from Lemma 3.18, one could ask whether the hardness estimates can be inductively lifted to d̃ = d in a similar way as was done in Lemma 3.18. This would require the formula ψ_same used in our construction to handle longer nested index encodings. As the number of context switches cannot be increased further, this seems to require quantification. Indeed, by adding suitable prenex alternating quantifiers, a corresponding formula could be defined.
However, this formula could no longer be used inside fixpoints since this would lead to formulas with non-prenex quantifiers, and it is unclear whether such alternating quantifiers could be moved to the front of the formula without changing the semantics.

Secondly, while the inability to inductively lift the hardness result to an arbitrary number of quantifier alternations raises the question whether quantifier alternations increase the complexity of a context bounded analysis at all, our result shows that this is at least the case for the first one. A complete answer to this question remains for future work.

Finally, the k-synchronous and k-context-bounded analyses can be combined by interpreting the quantifier free part of the given formula as a boolean combination of subformulas, each of which is analysed with one of these analyses. Our setup can easily handle this since the resulting automata for the analyses of these formulas are all synchronous automata which can be combined straightforwardly. In this setting, the hardness proof for the k-context-bounded analysis of automata can be lifted with quantifier alternations in the same way as in the proof of Theorem 6.5 since the subformulas added in the inductive step of the proof are synchronous. This yields a hardness result for the combined analysis that matches the upper bound for the k-context-bounded analysis.

Since certain combinations of path quantifiers lead to undecidability already for synchronous hyperlogics like HyperLTL [Finkbeiner and Hahn 2016], we consider satisfiability for formulas with restricted quantifier prefixes only. We say that a formula is in the ∃* fragment of H_μ if it only has ∃ quantifiers in its quantifier prefix. The ∀* fragment is defined analogously. Furthermore, we say that a formula is in the ∃*∀* fragment if its quantifier prefix has the form ∃π_1 ... ∃π_n ∀π′_1 ... ∀π′_m.
Note that the same restrictions have been considered in [Finkbeiner and Hahn 2016] for HyperLTL in order to obtain decidable fragments. However, as our proof of Theorem 5.4 shows, a quantifier prefix with just existential quantifiers suffices for undecidability in the case of H_μ if the quantifier-free part of the formula is not restricted or approximated as well. Thus, just like for model checking, we make use of the approximate analyses developed in Section 3 for the quantifier-free part of the formula. We now present complexity results for the decision problems corresponding to the resulting approximate analyses. The upper bounds apply to these analyses as well. An overview can be found in Table 2.

Theorem 7.1.
The satisfiability problem for the ∃*- and ∀*-fragment of
(1) synchronous H_μ is PSPACE-complete,
(2) k-synchronous H_μ is EXPSPACE-complete,
(3) k-context-bounded H_μ is (k−1)EXPSPACE-complete
when using rep_d(·) to represent formulas. The statement in (1) also holds for rep_t(·).
Using Theorem 5.2, the three problems become interreducible with the emptiness problems for the corresponding AAPA restrictions. This is done in the following way:
(i) An ∃* formula ∃π_1 ... ∃π_n.ψ is satisfiable iff A_ψ is non-empty.
(ii) A ∀* formula ∀π_1 ... ∀π_n.ψ is satisfiable iff A_{¬ψ} is empty.
(iii) An AAPA A is non-empty iff the formula ∃π_1 ... ∃π_n.ψ_A is satisfiable.
(iv) An AAPA A is empty iff the formula ∀π_1 ... ∀π_n.¬ψ_A is satisfiable.
This yields both upper and lower bounds for each of the problems using rep_d(·) as representation. The hardness result for item (1) and rep_t(·) can be obtained by reducing from the satisfiability problem for alternation-free HyperLTL instead. □
The satisfiability problem for the ∃*∀*-fragment of
(1) synchronous H_μ is EXPSPACE-complete,
(2) k-synchronous H_μ is EXPSPACE-complete,
(3) k-context-bounded H_μ is (k−1)EXPSPACE-complete
when using rep_d(·) to represent formulas.
For the upper bounds, we adapt the idea from [Finkbeiner and Hahn 2016] to test satisfiability of HyperLTL formulas only with a minimal set of traces: the set of traces chosen for the existential quantifiers. In such a set, the universal quantifiers can be instantiated in every possible combination and thus eliminated. More specifically, a HyperLTL formula ∃π_1 ... ∃π_n ∀π′_1 ... ∀π′_m.ψ is transformed into the equisatisfiable formula ∃π_1 ... ∃π_n. ⋀_{j_1=1}^{n} ... ⋀_{j_m=1}^{n} ψ[π_{j_1}/π′_1] ... [π_{j_m}/π′_m], which can then be tested for satisfiability using the same method as for ∃* formulas.

However, in our setting, a direct instantiation of the universal quantifiers via substitution is possible only for synchronous H_μ formulas since tests for atomic propositions can occur with different offsets otherwise. For k-synchronous and k-context-bounded formulas, we have to incorporate the conjunctive test for each of these arrangements, i.e. each set of substitutions, directly into our analysis of the corresponding automata.

In the case of k-synchronous formulas ∃π_1 ... ∃π_n ∀π′_1 ... ∀π′_m.ψ, we first create n^m copies of A_ψ on n + m input words, one for each arrangement. Then, for each arrangement a, we transform one copy of the AAPA into an APA A_ψ(a) with n input directions and size O(|ψ|_d · |Σ|^{n·k}). This is done using a variation of the procedure from Theorem 3.11 which eliminates m input directions and substitutes the corresponding checks according to the arrangement a. When we substitute a path π_i with a path π_j, we have to make sure that all moves that were previously performed on direction j are now performed on direction i without manipulating the moves that were previously made on i. Thus, we introduce a second input marker for π_j on direction i in the k·n window from the proof of Theorem 3.11. It is advanced whenever a symbol from direction j is read and does not affect the other markers on the same direction.
These additional 𝑚 markers do not asymptotically increase the size of the construction, so we obtain the size stated above. To perform the satisfiability test for the original formula, we now perform an emptiness test on the APA that makes a conjunctive move to A_𝜓(𝑎) for all arrangements 𝑎. Since there are 𝑛^𝑚 different arrangements, this APA has size O(𝑛^𝑚 · |𝜓|_𝑑 · |Σ|^{𝑘·𝑛}) and the test can be performed in EXPSPACE.

For a 𝑘-context-bounded analysis, we also have to incorporate the conjunctive test for all arrangements into the analysis. We first construct the structure S(𝑔) for each guess 𝑔 without identifying any directions. We combine this structure with an arrangement 𝑎 by replacing the test from Lemma 3.15 with ⋀_{𝑑′/𝑑 ∈ 𝑎} ⋀{𝑔′ ∈ 𝐹_{𝑑′}({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀} for all 𝑑 and some 𝑄₀ with 𝑄₀ ⊨ 𝜌₀. Here, we use 𝑑′/𝑑 ∈ 𝑎 to denote that in the arrangement 𝑎, 𝑑′ is substituted by 𝑑. When using this notation, we assume that every arrangement contains the substitution 𝑑/𝑑 so that the original paths are still considered. This integration of 𝑎 into S(𝑔) to obtain S(𝑎, 𝑔) does not increase its size beyond O(𝑔(𝑘−1, |𝜓|_𝑑)) asymptotically. When combining the tests for all arrangements 𝑎₁, ..., 𝑎_{𝑛^𝑚}, we have to keep in mind that each test can use a different guess. Thus, we construct APA S(𝑔₁, ..., 𝑔_{𝑛^𝑚}) that are parameterised in guesses 𝑔₁, ..., 𝑔_{𝑛^𝑚} and conjunctively move into S(𝑎ᵢ, 𝑔ᵢ). Since they consist of 𝑛^𝑚 APA of size O(𝑔(𝑘−1, |𝜓|_𝑑)) and an initial state, they asymptotically have size O(𝑔(𝑘−1, |𝜓|_𝑑)) as well. We remove alternation from these parameterised APA to obtain NPA S′(𝑔₁, ..., 𝑔_{𝑛^𝑚}) of size O(𝑔(𝑘−1, |𝜓|_𝑑)).
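The size bounds above are towers of exponentials. Assuming 𝑔 denotes the usual tower function (our reading of the bound, not spelled out in this excerpt), a small sketch shows why raising a size of level 𝑘 to a power, as happens when combining 𝑛^𝑚 guesses, stays at the same tower level: the exponent is absorbed one level down.

```python
def g(k, m):
    """Tower of exponentials: g(0, m) = m and g(k, m) = 2 ** g(k - 1, m)."""
    return m if k == 0 else 2 ** g(k - 1, m)

def power_of_tower(k, m, e):
    """g(k, m) ** e equals 2 ** (e * g(k - 1, m)): the exponent e only
    multiplies the next-lower tower level, so a polynomial blow-up of the
    argument suffices and the tower height is unchanged."""
    return g(k, m) ** e
```

This is the structural reason why the combined automaton, despite ranging over all guess tuples, still admits an emptiness test within the same (𝑘−1)-fold exponential space budget.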
The final NPA, which we test for emptiness to solve the satisfiability problem, nondeterministically guesses 𝑔₁ to 𝑔_{𝑛^𝑚} and moves to S′(𝑔₁, ..., 𝑔_{𝑛^𝑚}). Since there are |𝐺| = O(𝑔(𝑘−1, |𝜓|_𝑑)) possible guesses, we have |𝐺|^{𝑛^𝑚} = O(𝑔(𝑘−1, |𝜓|_𝑑)) possible combinations of guesses. Thus, the final NPA also has an asymptotic size of O(𝑔(𝑘−1, |𝜓|_𝑑)). This yields an emptiness test in (𝑘−1)-EXPSPACE.

For the lower bounds, we need two different reductions. A reduction from ∃*∀* HyperLTL satisfiability yields the first and second lower bound. The third one is obtained from the fact that a 𝑘-context-bounded ∃* formula is in particular an ∃*∀* formula. □

CONCLUSION

In this paper, we introduced Alternating Asynchronous Parity Automata (AAPA) and the novel fixpoint logic 𝐻𝜇 as tools for the analysis of asynchronous hyperproperties. We showed the most interesting decision problems for both models to be highly undecidable in general, but exhibited families of increasingly precise under- and overapproximations for both AAPA and 𝐻𝜇 and presented asymptotically optimal algorithms for most corresponding decision problems. We also identified syntactic fragments where these analyses yield precise results.

Several questions remain for future work. Firstly, while we have established an equivalence between AAPA and 𝐻𝜇 formulas over fixed path assignments, an interesting question is whether there is a natural model of tree automata, possibly extending AAPA, that is equivalent to the full logic with quantifiers, analogous to the correspondence between the modal 𝜇-calculus and Alternating Parity Tree Automata [Emerson and Jutla 1991]. This could lead to a more direct automata-theoretic approach to 𝐻𝜇 model checking. Secondly, it would be interesting to identify further approximate analyses and corresponding decidable fragments.

ACKNOWLEDGMENTS
This work was partially funded by DFG project Model-Checking of Navigation Logics (MoNaLog) (MU 1508/3). We thank the reviewers for their helpful comments and Roland Meyer and Sören van der Wall for valuable discussions. We also thank Laura Bozzelli for providing us with an extended version of [Bozzelli 2007].
REFERENCES
Henrik Reif Andersen. 1994. A polyadic modal 𝜇-calculus. Technical Report ID-TR: 1994-195. Dept. of Computer Science, Technical University of Denmark, Copenhagen.
Mohamed Faouzi Atig, Ahmed Bouajjani, and Shaz Qadeer. 2009. Context-Bounded Analysis for Concurrent Programs with Dynamic Creation of Threads. In TACAS 2009 (Lecture Notes in Computer Science), Stefan Kowalewski and Anna Philippou (Eds.), Vol. 5505. Springer, 107–123. https://doi.org/10.1007/978-3-642-00768-2_11
Kshitij Bansal and Stéphane Demri. 2013. Model-Checking Bounded Multi-Pushdown Systems. In CSR 2013. 405–417. https://doi.org/10.1007/978-3-642-38536-0_35
Howard Barringer, Ruurd Kuiper, and Amir Pnueli. 1986. A Really Abstract Concurrent Model and its Temporal Logic. In POPL 1986. ACM Press, 173–183. https://doi.org/10.1145/512644.512660
Laura Bozzelli. 2007. Alternating Automata and a Temporal Fixpoint Calculus for Visibly Pushdown Languages. In CONCUR 2007, Luís Caires and Vasco T. Vasconcelos (Eds.). Springer, 476–491.
Laura Bozzelli, Bastien Maubert, and Sophie Pinchinat. 2015. Unifying hyper and epistemic temporal logics. In FoSSaCS 2015. Springer, 167–182.
Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, and César Sánchez. 2014. Temporal Logics for Hyperproperties. In Principles of Security and Trust, Martín Abadi and Steve Kremer (Eds.). Springer, 265–284.
Michael R. Clarkson and Fred B. Schneider. 2010. Hyperproperties. J. Comput. Secur. 18, 6 (2010), 1157–1210. http://dl.acm.org/citation.cfm?id=1891823.1891830
Norine Coenen, Bernd Finkbeiner, Christopher Hahn, and Jana Hofmann. 2019. The Hierarchy of Hyperlogics. In LICS 2019. 1–13. https://doi.org/10.1109/LICS.2019.8785713
Patrick Cousot and Radhia Cousot. 1979. Constructive versions of Tarski's fixed point theorems. Pacific J. Math. 82, 1 (1979), 43–57. https://projecteuclid.org:443/euclid.pjm/1102785059
Christian Dax and Felix Klaedtke. 2008. Alternation elimination by complementation. In LPAR 2008. Springer, 214–229.
Stéphane Demri, Valentin Goranko, and Martin Lange. 2016. Temporal Logics in Computer Science: Finite-State Systems. Cambridge University Press. https://doi.org/10.1017/CBO9781139236119
Antoine Durand-Gasselin, Javier Esparza, Pierre Ganty, and Rupak Majumdar. 2015. Model Checking Parameterized Asynchronous Shared-Memory Systems. In CAV 2015 (Lecture Notes in Computer Science), Daniel Kroening and Corina S. Pasareanu (Eds.), Vol. 9206. Springer, 67–84. https://doi.org/10.1007/978-3-319-21690-4_5
E. Allen Emerson and Charanjit S. Jutla. 1991. Tree Automata, Mu-Calculus and Determinacy (Extended Abstract). In FOCS 1991. 368–377. https://doi.org/10.1109/SFCS.1991.185392
Javier Esparza, Pierre Ganty, and Rupak Majumdar. 2016. Parameterized Verification of Asynchronous Shared-Memory Systems. J. ACM 63, 1 (2016), 10:1–10:48. https://doi.org/10.1145/2842603
Bernd Finkbeiner. 2017. Temporal Hyperproperties. Bulletin of the EATCS 123 (2017).
Bernd Finkbeiner and Christopher Hahn. 2016. Deciding Hyperproperties. In CONCUR 2016. 13:1–13:14. https://doi.org/10.4230/LIPIcs.CONCUR.2016.13
Bernd Finkbeiner, Christopher Hahn, Philip Lukert, Marvin Stenger, and Leander Tentrup. 2020. Synthesis from hyperproperties. Acta Informatica 57, 1-2 (2020), 137–163. https://doi.org/10.1007/s00236-019-00358-2
Bernd Finkbeiner, Christopher Hahn, Marvin Stenger, and Leander Tentrup. 2019. Monitoring hyperproperties. Formal Methods Syst. Des. 54, 3 (2019), 336–363. https://doi.org/10.1007/s10703-019-00334-z
Bernd Finkbeiner, Markus N. Rabe, and César Sánchez. 2015. Algorithms for Model Checking HyperLTL and HyperCTL*. In CAV 2015. 30–48. https://doi.org/10.1007/978-3-319-21690-4_3
Olivier Finkel. 2006. On the Accepting Power of 2-Tape Büchi Automata. In STACS 2006. 301–312. https://doi.org/10.1007/11672142_24
Olivier Finkel. 2016. Infinite games specified by 2-tape automata. Ann. Pure Appl. Logic. CoRR abs/1206.4860 (2014). arXiv:1206.4860v5 http://arxiv.org/abs/1206.4860v5
Olivier Finkel and Dominique Lecomte. 2009. Decision problems for Turing machines. Inf. Process. Lett. (2009).
Pierre Ganty and Rupak Majumdar. 2012. Algorithmic verification of asynchronous programs. ACM Trans. Program. Lang. Syst. 34, 1 (2012), 6:1–6:48. https://doi.org/10.1145/2160910.2160915
Pierre Ganty, Rupak Majumdar, and Andrey Rybalchenko. 2009. Verifying liveness for asynchronous programs. In POPL 2009, Zhong Shao and Benjamin C. Pierce (Eds.). ACM, 102–113. https://doi.org/10.1145/1480881.1480895
Dainis Geidmanis. 1987. On the Capabilities of Alternating and Nondeterministic Multitape Automata. In FCT 1987. 150–154. https://doi.org/10.1007/3-540-18740-5_35
Jens Oliver Gutsfeld, Markus Müller-Olm, and Christoph Ohrem. 2020. Propositional Dynamic Logic for Hyperproperties. In CONCUR 2020 (LIPIcs), Igor Konnov and Laura Kovács (Eds.), Vol. 171. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 50:1–50:22. https://doi.org/10.4230/LIPIcs.CONCUR.2020.50
Oscar H. Ibarra and Nicholas Q. Trân. 2013. How to synchronize the Heads of a Multitape Automaton. Int. J. Found. Comput. Sci. 24, 6 (2013), 799–814. https://doi.org/10.1142/S0129054113400194
Hartley Rogers Jr. 1987. Theory of recursive functions and effective computability (Reprint from 1967). MIT Press. http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3182
Andreas Krebs, Arne Meier, Jonni Virtema, and Martin Zimmermann. 2017. Team Semantics for the Specification and Verification of Hyperproperties. CoRR abs/1709.08510 (2017).
Martin Lange. 2005. Weak Automata for the Linear Time 𝜇-Calculus. In VMCAI 2005, Radhia Cousot (Ed.). Springer, 267–281.
Martin Lange. 2015. The Arity Hierarchy in the Polyadic 𝜇-Calculus. In FICS (EPTCS), Vol. 191. 105–116.
Isabella Mastroeni and Michele Pasqua. 2017. Hyperhierarchy of Semantics - A Formal Framework for Hyperproperties Verification. In SAS 2017 (Lecture Notes in Computer Science), Francesco Ranzato (Ed.), Vol. 10422. Springer, 232–252. https://doi.org/10.1007/978-3-319-66706-5_12
Isabella Mastroeni and Michele Pasqua. 2018. Verifying Bounded Subset-Closed Hyperproperties. In SAS 2018 (Lecture Notes in Computer Science), Andreas Podelski (Ed.), Vol. 11002. Springer, 263–283. https://doi.org/10.1007/978-3-319-99725-4_17
Dimiter Milushev and Dave Clarke. 2013. Incremental Hyperproperty Model Checking via Games. In NordSec 2013. Springer, 247–262. https://doi.org/10.1007/978-3-642-41488-6_17
Anca Muscholl. 1996. On the Complementation of Asynchronous Cellular Büchi Automata. Theor. Comput. Sci.
Shaz Qadeer. 2008. The Case for Context-Bounded Verification of Concurrent Programs. In SPIN 2008 (Lecture Notes in Computer Science), Klaus Havelund, Rupak Majumdar, and Jens Palsberg (Eds.), Vol. 5156. Springer, 3–6. https://doi.org/10.1007/978-3-540-85114-1_2
Shaz Qadeer and Jakob Rehof. 2005. Context-Bounded Model Checking of Concurrent Software. In TACAS 2005 (Lecture Notes in Computer Science), Nicolas Halbwachs and Lenore D. Zuck (Eds.), Vol. 3440. Springer, 93–107. https://doi.org/10.1007/978-3-540-31980-1_7
Markus N. Rabe. 2016. A temporal logic approach to Information-flow control. Ph.D. Dissertation. Saarland University.
Michael O. Rabin and Dana S. Scott. 1959. Finite Automata and Their Decision Problems. IBM Journal of Research and Development 3, 2 (1959), 114–125. https://doi.org/10.1147/rd.32.0114
Aravinda Prasad Sistla. 1983. Theoretical Issues in the Design and Verification of Distributed Systems. Ph.D. Dissertation. Carnegie-Mellon University, USA.
Alex Spelten, Wolfgang Thomas, and Sarah Winter. 2011. Trees over Infinite Structures and Path Logics with Synchronization. In INFINITY 2011 (EPTCS), Fang Yu and Chao Wang (Eds.), Vol. 73. 20–34. https://doi.org/10.4204/EPTCS.73.5
Larry Joseph Stockmeyer. 1974. The complexity of decision problems in automata theory and logic. Ph.D. Dissertation. MIT.
Alfred Tarski. 1955. A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5, 2 (1955), 285–309. https://projecteuclid.org:443/euclid.pjm/1103044538
Moshe Y. Vardi. 1988. A Temporal Fixpoint Calculus. In POPL 1988. ACM Press, 250–259.
Wieslaw Zielonka. 1987. Notes on Finite Asynchronous Automata. ITA 21, 2 (1987), 99–135.
A MISSING PROOFS FROM SECTION 3
A.1 Recursion Theory of AAPA
We briefly outline some elementary notions of recursion theory and the theory of analytic sets. We refer the reader to [Jr. 1987] for a thorough introduction.

A 2-tape Büchi automaton is a sextuple T = (𝐾, Σ₁, Σ₂, Δ, 𝑞₀, 𝐹) where 𝐾 is a finite set of states, Σ₁, Σ₂ are finite alphabets, Δ is a finite subset of 𝐾 × Σ₁* × Σ₂* × 𝐾, 𝑞₀ is the initial state and 𝐹 ⊆ 𝐾 is the set of final states. A computation C of T is an infinite sequence of transitions (𝑞₀, 𝑢₁, 𝑣₁, 𝑞₁)(𝑞₁, 𝑢₂, 𝑣₂, 𝑞₂)... . A computation is accepting if a state 𝑞 ∈ 𝐹 is visited infinitely often. The input word then is 𝑢 = 𝑢₁𝑢₂... and the output word is 𝑣 = 𝑣₁𝑣₂... . The infinitary rational relation R(T) ⊆ Σ₁^𝜔 × Σ₂^𝜔 accepted by T is the set of tuples (𝑢, 𝑣) for which there is an accepting computation of T. A 2-tape Büchi automaton can be considered an NAPA in which transitions are allowed to depend on input words and emit output words instead of single symbols only.

Let Σ¹₀ = Π¹₀ be the set of formulas of second order arithmetic with no set quantifiers. A formula in the language of second order arithmetic is Σ¹ₙ₊₁ if it is logically equivalent to a formula of the form ∃𝑋₁...∃𝑋ₙ 𝜓 where 𝜓 is Π¹ₙ, and Π¹ₙ₊₁ if it is logically equivalent to a formula of the form ∀𝑋₁...∀𝑋ₙ 𝜓 where 𝜓 is Σ¹ₙ. As usual, capital notation for variables indicates that they are second order variables. A set of natural numbers is said to be Σ¹ₙ (resp. Π¹ₙ) if there is a Σ¹ₙ (resp. Π¹ₙ) formula defining it. Given two sets 𝐴, 𝐵 ⊆ ℕ, we say that 𝐴 is 1-reducible to 𝐵 (written 𝐴 ≤₁ 𝐵) if there is a total (i), computable (ii) and injective (iii) function 𝑓 : ℕ → ℕ such that 𝐴 = 𝑓⁻¹(𝐵) (iv). A set 𝐴 ⊆ ℕ of natural numbers is called Σ¹ₙ-hard (resp. Π¹ₙ-hard) if for every Σ¹ₙ (resp. Π¹ₙ) set 𝐵, 𝐵 ≤₁ 𝐴 holds. 𝐴 is called Σ¹ₙ-complete (resp. Π¹ₙ-complete) if 𝐴 is Σ¹ₙ-hard (resp. Π¹ₙ-hard) and 𝐴 is a Σ¹ₙ (resp. Π¹ₙ) set. If 𝐴 is Σ¹ₙ-hard, then its complement is Π¹ₙ-hard and vice versa.
We will make use of the following fact:

Proposition A.1 ([Finkel and Lecomte 2009]). For two-tape Büchi automata, the inclusion problem, i.e. the language L_⊆ = {(T, T′) | R(T) ⊆ R(T′)}, is Π¹₂-complete. Thus, its complement is Σ¹₂-complete.

Using this fact, we are now ready to classify the recursion-theoretic strength of AAPA:
Proof of Theorem 3.7.
We reduce from the complement of the inclusion problem for two-tape Büchi automata T, T′. For this purpose, we construct an AAPA A with two tapes. Trivially, T can be converted to an AAPA since transitions depending on multiple input symbols can be simulated stepwise. Furthermore, by Theorem 3.2, AAPA are closed under complement. We can thus build an AAPA accepting R(T), an AAPA accepting the complement of R(T′), and use conjunctive alternation to enforce that an input tuple is accepted by both automata, resulting in A. The reduction outlined above is a 1-reduction since it is obviously total (i) and computable (ii), every tuple of two-tape Büchi automata is assigned a unique AAPA (iii), and L(A) is non-empty iff R(T) \ R(T′) is non-empty (iv). □

Finally, the following well-known fact illustrates why Σ¹₂-hard problems are highly intractable and not amenable to exhaustive approximation analyses:

Proposition A.2. No Σ¹₂-hard problem is arithmetical. In particular, no Σ¹₂-hard problem is recursively enumerable or co-enumerable.

A.2 Proof of Lemma 3.15
Proof.
For the first direction, assume that there is a 𝑘-context-bounded accepting run 𝑇 of A. We use the accepting run on (𝑤₁, ..., 𝑤ₙ) to construct accepting runs of S on 𝑤_𝑑 starting in ⋀{𝑔′ ∈ 𝐹_𝑑({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀}, where 𝑔 is constructed from the accepting run as well.

As the first step, we construct 𝑔 from the run. We divide the run into maximal connected sections such that each section moves forward in a single direction. Since 𝑇 is 𝑘-context-bounded, a path in it can switch between different sections at most 𝑘−1 times. 𝑔 is now defined recursively. On the first level, we choose the set of states in which 𝑇 enters a different section than the one the states 𝑞₀ ∈ 𝑄₀ belong to. These states 𝑞 are then each combined with a set of states where the subtree 𝑇′ of 𝑇 starting in 𝑞 enters a different section than 𝑞 belongs to. Repeating this process until a bottom section is reached after at most 𝑘−1 steps yields 𝑔 ∈ 𝐺.

Let 𝑑 ∈ {1, ..., 𝑛} be an arbitrary direction. We show that S has an accepting run from ⋀{𝑔′ ∈ 𝐹_𝑑({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀} on 𝑤_𝑑. Note that due to the way 𝐹_𝑑 is defined, it yields a set of states {𝑞₁, ..., 𝑞ₘ} with corresponding guesses 𝑔₁, ..., 𝑔ₘ such that 𝑇 enters sections with direction 𝑑 first exactly in the states 𝑞₁, ..., 𝑞ₘ. Therefore, when reaching some 𝑞ᵢ, 𝑇 has not moved forward in direction 𝑑 yet, and an accepting run from ⋀{𝑔′ ∈ 𝐹_𝑑({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀} on 𝑤_𝑑 consists of the conjunction of accepting runs from 𝑞ᵢ for all 𝑖. These runs are given by taking the subtree 𝑇ᵢ of 𝑇 starting in 𝑞ᵢ and erasing every section not belonging to direction 𝑑 in it. Whenever a section belonging to direction 𝑑 is disconnected from 𝑇ᵢ in this way, we reattach it at the point where the connecting sections were erased. After this process is completed, we insert true loops at the end of every finite path.

This way, we indeed have a run of S, since erasure of non-𝑑 sections and reattachment of 𝑑-sections corresponds to conjunctive moves into 𝐹_𝑑(𝑔″) in the definition of 𝜌_S. Insertion of true loops corresponds to empty sets 𝐹_𝑑(𝑔″) and thus empty conjunctions (which are equivalent to moves to true) in the definition of 𝜌_S. Also, it is indeed an accepting run: infinite paths in the run that are constructed from infinite paths in 𝑇ᵢ are obtained by erasing finite subpaths, so the parity condition stays fulfilled, and paths ending in true loops are fulfilled by default.

For the other direction, assume that there is a 𝑔 ∈ 𝐺 such that S accepts 𝑤_𝑑 from ⋀{𝑔′ ∈ 𝐹_𝑑({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀} for all 𝑑 ∈ {1, ..., 𝑛}. We use all accepting runs on 𝑤_𝑑 to construct a 𝑘-context-bounded accepting run of A on (𝑤₁, ..., 𝑤ₙ). This works in exactly the opposite way in which multiple runs were created from a single one in the first direction. First, we notice that acceptance from ⋀{𝑔′ ∈ 𝐹_𝑑({(𝑞₀, 𝑔)}) | 𝑞₀ ∈ 𝑄₀} is induced by a set of runs from 𝑞ᵢ for each (𝑞ᵢ, 𝑔ᵢ) ∈ ⋃_{𝑞₀ ∈ 𝑄₀} 𝐹_𝑑({(𝑞₀, 𝑔)}). We erase true loops induced by empty 𝐹_𝑑(𝑔″) sets. Next, we cut off transitions to 𝐹_𝑑(𝑔″) and obtain even more partial accepting runs in S. Given these, we reconnect them according to the guesses we have made in 𝑔: wherever a transition to 𝐹_𝑑(𝑔″) for some (𝑞′, 𝑔″) was removed, we instead transition to 𝑞′. Additionally, each labelling with (𝑞, 𝑔) is replaced with 𝑞. Then we have indeed obtained a run of A, since 𝜌_S is built in such a way that, given a set of pairs (𝑞ᵢ, 𝑔ᵢ) in 𝑔, we have a transition ⋀𝑄′_𝑑 × {𝑔} ∧ ⋀ᵢ 𝐹_{𝛾(𝑞)}(𝑔ᵢ) in S iff we have a transition ⋀𝑄′_𝑑 ∧ ⋀ᵢ 𝑞ᵢ in A. It is 𝑘-context-bounded because guesses in 𝑔 were only made 𝑘−1 times. □

A.3 Reasons for restricting contexts to a single direction
We now elaborate on the remark about 𝑘-context-bounded AAPA with additional synchronous steps. We call an AAPA A 𝑘-sync-context-bounded iff it switches between sync-contexts at most 𝑘−1 times. We call a sequence of transitions 𝑡ᵢ, ..., 𝑡ⱼ in a run over words 𝑤₁, ..., 𝑤ₙ a synchronous block if for every direction 𝑑, we have 𝑐ʲ_𝑑 = 𝑐ⁱ_𝑑 + 1, i.e. every direction has been progressed by exactly one step. A sync-context is a (possibly infinite) path 𝑝 = 𝑡₁𝑡₂... in a run of an AAPA over 𝑤₁, ..., 𝑤ₙ such that either transitions between successive states all use the same direction or all directions are advanced uniformly. That means there is a 𝑑 ∈ 𝑀 such that for all 𝑖 ∈ {1, ..., |𝑝|}, either 𝑐^{𝑖+1}_𝑑 = 𝑐^𝑖_𝑑 + 1 and all other directions remain unchanged, or 𝑝 is a concatenation of synchronous blocks. We call a run 𝑇 of an AAPA 𝑘-sync-context-bounded if every path in 𝑇 switches between different sync-contexts at most 𝑘−1 times.

Theorem A.3.
The problem of deciding whether there is a 𝑘-sync-context-bounded accepting run of an AAPA, and thus the emptiness problem for 𝑘-sync-context-bounded AAPA, is undecidable. Proof.
Let M be a deterministic Turing machine. For M, we build a 3-sync-context-bounded AAPA A recognizing an encoding of accepting runs of M with two directions as follows. One part of A checks synchronously whether both directions contain the same input. It can also check whether the sequences in the directions represent valid configurations separated by a marker, whether the first configuration is the initial configuration, and whether an accepting state is eventually reached.

To check whether the sequence of configurations in the directions is constructed in accordance with the transition function of M, we combine asynchronous with synchronous contexts in a second part of A: we first progress both directions to the start of a configuration via conjunctive alternation. Then, a context switch is performed to asynchronously advance one of the directions to the next configuration. After a second context switch, we can iterate through both configurations synchronously while checking that they satisfy the transition relation of M. Note that this approach of checking the transition relation does not require a size bound on the configurations of M since any configuration is at most one tape cell larger than its predecessor. It also shows that only two context switches are sufficient for undecidability in the presence of synchronous contexts. □

B MISSING PROOFS FROM SECTION 4
B.1 Proof of Theorem 4.4
Proof.
We show the claims by induction over the structure of 𝜓. We therefore assume that 𝜓 is given in positive normal form. Let 𝑋 be an arbitrary predicate, V an arbitrary predicate valuation, Π an arbitrary path assignment and 𝑘 ∈ ℕ ∪ {∞}. Let 𝜉 ⊑ 𝜉′ for 𝜉, 𝜉′ : 𝑃𝐴 → 𝐺ₖⁿ in the following cases. For monotonicity, we show that 𝛼(𝜉) ⊑ 𝛼(𝜉′) in each case.

Case 1: 𝜓 = 𝑎_{𝜋ᵢ} or 𝜓 = ¬𝑎_{𝜋ᵢ} (both cases are analogous, we only treat the first):
𝛼(𝜉) = ⟦𝑎_{𝜋ᵢ}⟧^{V[𝑋↦𝜉]}_𝑘 = 𝜆Π.{(𝑗₁, ..., 𝑗ₙ) ∈ 𝐺ₖⁿ | 𝑎 ∈ 𝐿(Π(𝜋ᵢ)(𝑗ᵢ))} = ⟦𝑎_{𝜋ᵢ}⟧^{V[𝑋↦𝜉′]}_𝑘 = 𝛼(𝜉′)

Case 2: 𝜓 = 𝑌.
Case 2.1: 𝑋 = 𝑌:
𝛼(𝜉) = ⟦𝑌⟧^{V[𝑋↦𝜉]}_𝑘 = V[𝑋↦𝜉](𝑌) = V[𝑌↦𝜉](𝑌) = 𝜉 ⊑ 𝜉′ = V[𝑌↦𝜉′](𝑌) = V[𝑋↦𝜉′](𝑌) = ⟦𝑌⟧^{V[𝑋↦𝜉′]}_𝑘 = 𝛼(𝜉′)
Case 2.2: 𝑋 ≠ 𝑌:
𝛼(𝜉) = ⟦𝑌⟧^{V[𝑋↦𝜉]}_𝑘 = V[𝑋↦𝜉](𝑌) = V[𝑋↦𝜉′](𝑌) = ⟦𝑌⟧^{V[𝑋↦𝜉′]}_𝑘 = 𝛼(𝜉′)

Case 3: 𝜓 = ◯_{𝜋ᵢ}𝜓′. By induction hypothesis, 𝛼′(𝜉) := ⟦𝜓′⟧^{V[𝑋↦𝜉]}_𝑘 is monotone. Then:
𝛼(𝜉) = ⟦◯_{𝜋ᵢ}𝜓′⟧^{V[𝑋↦𝜉]}_𝑘 = 𝜆Π.{(𝑗₁, ..., 𝑗ₙ) ∈ 𝐺ₖⁿ | (𝑗₁, ..., 𝑗ᵢ+1, ..., 𝑗ₙ) ∈ 𝛼′(𝜉)(Π)} ⊑ 𝜆Π.{(𝑗₁, ..., 𝑗ₙ) ∈ 𝐺ₖⁿ | (𝑗₁, ..., 𝑗ᵢ+1, ..., 𝑗ₙ) ∈ 𝛼′(𝜉′)(Π)} = ⟦◯_{𝜋ᵢ}𝜓′⟧^{V[𝑋↦𝜉′]}_𝑘 = 𝛼(𝜉′)

Case 4: 𝜓 = 𝜓′ ∨ 𝜓″. By induction hypothesis, ⟦𝜓′⟧^{V[𝑋↦𝜉]}_𝑘 and ⟦𝜓″⟧^{V[𝑋↦𝜉]}_𝑘 are monotone in 𝜉, and monotonicity of 𝛼 follows since the pointwise union of larger sets is larger.

Case 5: 𝜓 = 𝜓′ ∧ 𝜓″. Monotonicity can be shown analogously to Case 4.

Case 6: 𝜓 = 𝜇𝑌.𝜓′ or 𝜓 = 𝜈𝑌.𝜓′. Since we are assuming positive normal form, all bound path predicates are distinct, so 𝑋 ≠ 𝑌. By induction hypothesis, 𝛼′(𝜉) := ⟦𝜓′⟧^{V[𝑌↦𝜉′][𝑋↦𝜉]}_𝑘 is monotone for all 𝜉′. We treat the least fixpoint; the other case is analogous:
𝛼(𝜉) = ⟦𝜇𝑌.𝜓′⟧^{V[𝑋↦𝜉]}_𝑘 = ⨅{𝜉″ : 𝑃𝐴 → 𝐺ₖⁿ | 𝜉″ ⊒ ⟦𝜓′⟧^{V[𝑋↦𝜉][𝑌↦𝜉″]}_𝑘} = ⨅{𝜉″ : 𝑃𝐴 → 𝐺ₖⁿ | 𝜉″ ⊒ ⟦𝜓′⟧^{V[𝑌↦𝜉″][𝑋↦𝜉]}_𝑘} ⊑ ⨅{𝜉″ : 𝑃𝐴 → 𝐺ₖⁿ | 𝜉″ ⊒ ⟦𝜓′⟧^{V[𝑌↦𝜉″][𝑋↦𝜉′]}_𝑘} = ⨅{𝜉″ : 𝑃𝐴 → 𝐺ₖⁿ | 𝜉″ ⊒ ⟦𝜓′⟧^{V[𝑋↦𝜉′][𝑌↦𝜉″]}_𝑘} = ⟦𝜇𝑌.𝜓′⟧^{V[𝑋↦𝜉′]}_𝑘 = 𝛼(𝜉′) □

B.2 Proof of Corollary 4.5
Proof.
Both claims ensue from the fact that (𝑃𝐴 → ℕⁿ, ⊑) is a complete lattice, together with Theorem 4.4 and the Knaster-Tarski fixpoint theorem. □

B.3 Proof of Theorem 4.6
Proof.
Mostly a straightforward structural induction on 𝜓, using the fact that 𝐺ₖ ⊆ 𝐺ₖ′ for 𝑘 ≤ 𝑘′. In the fixpoint case 𝜓 = 𝜇𝑋.𝜓′, we need to establish that 𝛽ₖ(𝜉) = ⟦𝜓′⟧^{V[𝑋↦𝜉]}_𝑘 ⊑ 𝛽ₖ′(𝜉) = ⟦𝜓′⟧^{V[𝑋↦𝜉]}_{𝑘′} and therefore 𝑙𝑓𝑝(𝛽ₖ) ⊑ 𝑙𝑓𝑝(𝛽ₖ′). □

B.4 Proof of Corollary 4.7
Proof.
We show that Π ⊨^K_𝑘 𝜑 implies Π ⊨^K_{𝑘′} 𝜑. The claim then follows immediately. Fix a Kripke structure K, a formula 𝜑 and some 𝑘, 𝑘′ with 𝑘 ≤ 𝑘′.

For an existential quantifier ∃𝜋.𝜑, we have to show that Π[𝜋 ↦ 𝑝] ⊨^K_𝑘 𝜑 for some 𝑝 ∈ 𝑃𝑎𝑡ℎ𝑠(K) implies Π[𝜋 ↦ 𝑝′] ⊨^K_{𝑘′} 𝜑 for some 𝑝′ ∈ 𝑃𝑎𝑡ℎ𝑠(K). Indeed, for 𝑝 = 𝑝′, the claim follows from the induction hypothesis. The case for a universal quantifier is analogous.

For a quantifier-free formula 𝜓, we have to show that (0, ..., 0) ∈ ⟦𝜓⟧^V_𝑘(Π) for some V implies (0, ..., 0) ∈ ⟦𝜓⟧^{V′}_{𝑘′}(Π) for some V′. Indeed, by Theorem 4.6, the claim holds for V′ = V. □

B.5 Formal semantics on traces
Let T be a set of traces. We call a function Π : 𝑁 → T a trace assignment and denote by TA the set of all trace assignments. Then we use V : 𝜒 → TA → ℕⁿ to denote a predicate valuation. Manipulations on these functions are defined as for path assignments.

We again differentiate between semantics for the two types of formulas: quantifier semantics and trace semantics. For a quantified formula 𝜑, we write T ⊨ₖ 𝜑 to denote that the set of traces T fulfills the formula 𝜑, i.e. iff {} ⊨^T_𝑘 𝜑 for the empty trace assignment {}. For a quantifier-free formula 𝜓, we instead consider a semantics similar to the path semantics from Definition 4.3, with the difference that we consider trace assignments instead of path assignments.

Definition B.1 (Quantifier semantics).
Π ⊨^T_𝑘 ∃𝜋.𝜑 iff Π[𝜋 ↦ 𝑡] ⊨^T_𝑘 𝜑 for some 𝑡 ∈ T
Π ⊨^T_𝑘 ∀𝜋.𝜑 iff Π[𝜋 ↦ 𝑡] ⊨^T_𝑘 𝜑 for all 𝑡 ∈ T
Π ⊨^T_𝑘 𝜓 iff (0, ..., 0) ∈ ⟦𝜓⟧^V_𝑘(Π) for some V
for a quantified formula 𝜑 and a quantifier-free formula 𝜓.

Definition B.2 (Trace Semantics).
⟦𝑎_{𝜋ᵢ}⟧^V_𝑘 := 𝜆Π.{(𝑗₁, ..., 𝑗ₙ) ∈ 𝐺ₖⁿ | 𝑎 ∈ Π(𝜋ᵢ)(𝑗ᵢ)}
⟦𝑋⟧^V_𝑘 := V(𝑋)
⟦◯_{𝜋ᵢ}𝜑⟧^V_𝑘 := 𝜆Π.{(𝑗₁, ..., 𝑗ₙ) ∈ 𝐺ₖⁿ | (𝑗₁, ..., 𝑗ᵢ+1, ..., 𝑗ₙ) ∈ ⟦𝜑⟧^V_𝑘(Π)}
⟦𝜑 ∨ 𝜑′⟧^V_𝑘 := 𝜆Π.⟦𝜑⟧^V_𝑘(Π) ∪ ⟦𝜑′⟧^V_𝑘(Π)
⟦¬𝜑⟧^V_𝑘 := 𝜆Π.𝐺ₖⁿ \ ⟦𝜑⟧^V_𝑘(Π)
⟦𝜇𝑋.𝜑⟧^V_𝑘 := ⨅{𝜉 : 𝑇𝐴 → 𝐺ₖⁿ | 𝜉 ⊒ ⟦𝜑⟧^{V[𝑋↦𝜉]}_𝑘}

C MISSING PROOFS FROM SECTION 5
C.1 Proof of claim about well-formed valuations
Theorem C.1. If V is a well-formed valuation, then ⟦𝜓⟧^V_{𝑘′} is well-formed, i.e. for all vectors 𝑣, 𝑣′ and path assignments Π, Π′ with Π[𝑣] = Π′[𝑣′] it holds that: 𝑣 ∈ ⟦𝜓⟧^V_{𝑘′}(Π) iff 𝑣′ ∈ ⟦𝜓⟧^V_{𝑘′}(Π′). Proof.
We show the claim by a structural induction on 𝜓. Let 𝑣, 𝑣′ be arbitrary vectors and Π, Π′ arbitrary path assignments with Π[𝑣] = Π′[𝑣′] in the following cases.

Atomic propositions: this case follows immediately from the assumption that Π[𝑣] = Π′[𝑣′].
Predicates: this case follows immediately from the assumption that V is well-formed.
Next: this case follows from the assumption that Π[𝑣] = Π′[𝑣′] and the induction hypothesis.
Boolean connectives: this case follows immediately from the assumption that Π[𝑣] = Π′[𝑣′] and the induction hypothesis.
Fixpoints: we use the approximant characterisation ⨆_{𝜅≥0} 𝛼^𝜅(⊥) for 𝜓 and show the claim by a transfinite induction on 𝜅. We use (𝐼𝐻₁) for the structural induction's hypothesis and (𝐼𝐻₂) for the transfinite induction's hypothesis.

In the base case of the transfinite induction, 𝜅 = 0, we trivially have 𝑣 ∈ 𝛼⁰(⊥)(Π) = ∅ iff 𝑣′ ∈ 𝛼⁰(⊥)(Π′) = ∅.

In the inductive step of the transfinite induction, 𝜅 ↦ 𝜅 + 1, we have 𝑣 ∈ 𝛼^{𝜅+1}(⊥)(Π) = ⟦𝜓⟧^{V[𝑋↦𝛼^𝜅(⊥)]}_𝑘(Π) and show that 𝑣′ ∈ 𝛼^{𝜅+1}(⊥)(Π′) = ⟦𝜓⟧^{V[𝑋↦𝛼^𝜅(⊥)]}_𝑘(Π′). To this end, notice that (𝐼𝐻₂) implies that V[𝑋 ↦ 𝛼^𝜅(⊥)] is a well-formed valuation. Then the claim follows immediately from (𝐼𝐻₁).

In the limit case of the transfinite induction, 𝜅 < 𝜆 ↦ 𝜆, let 𝑣 ∈ 𝛼^𝜆(⊥)(Π) = ⋃_{𝜅<𝜆} 𝛼^𝜅(⊥)(Π). Then there is a 𝜅 < 𝜆 such that 𝑣 ∈ 𝛼^𝜅(⊥)(Π). The induction hypothesis (𝐼𝐻₂) then implies that 𝑣′ ∈ 𝛼^𝜅(⊥)(Π′) and thus 𝑣′ ∈ 𝛼^𝜆(⊥)(Π′). □

If we now consider starting from a trivially well-formed valuation like 𝜆𝑋.⊥, then we can see from the transfinite induction in the fixpoint case of the proof that only well-formed valuations occur in fixpoint iterations of 𝐻𝜇.

C.2 Proof of Theorem 5.2 part 1
In this section, we prove that the construction presented in subsection 5.1 indeed results in anautomaton that is K -equivalent to 𝜓 . Proof.
By structural induction over the structure of 𝜓 . Let 𝑣 = ( 𝑣 , ..., 𝑣 𝑛 ) ∈ N 𝑛 , let Π with Π ( 𝜋 𝑖 ) = 𝑠 𝑖 𝑠 𝑖 ... be an arbitrary path assignment and let V an arbitrary predicate valuation in thefollowing cases. Atomic propositions:
Doing the case for 𝑎 𝜋 𝑖 , the negated case is analogous. It is easy to seethat L (A 𝑎 𝜋𝑖 [ 𝑋 : L (V ( 𝑋 ) ( Π )) , ..., 𝑋 𝑚 : L (V( 𝑋 𝑚 ) ( Π ))]) = L (A 𝑎 𝜋𝑖 ) .Let 𝑣 ∈ È 𝑎 𝜋 𝑖 É V ( Π ) . By the definition of semantics we have 𝑎 ∈ 𝐿 ( Π ( 𝜋 𝑖 ) ( 𝑣 𝑖 )) , thus for the firstsymbol 𝑠 𝑣 𝑖 𝑖 of 𝑤 Π [ 𝑣 ] in direction 𝑖 we have 𝑎 ∈ 𝐿 ( 𝑠 𝑣 𝑖 𝑖 ) , which induces an accepting run of A 𝑎 𝜋𝑖 .Therefore we have 𝑤 Π [ 𝑣 ] ∈ L (A 𝑎 𝜋𝑖 ) .Let 𝑤 Π [ 𝑣 ] ∈ L (A 𝑎 𝜋𝑖 ) . By construction, the accepting run of A 𝑎 𝜋𝑖 on 𝑤 Π [ 𝑣 ] has to move to ( 𝑡𝑡 ) with the first symbol read in direction 𝑖 , which implies 𝑎 ∈ 𝐿 ( 𝑠 𝑣 𝑖 𝑖 ) for the first symbol 𝑠 𝑣 𝑖 𝑖 of 𝑤 Π [ 𝑣 ] in direction 𝑖 . By the definition of 𝑤 Π we then have 𝑎 ∈ 𝐿 ( Π ( 𝜋 𝑖 ) ( 𝑣 𝑖 )) , which by the definition ofsemantics directly implies 𝑣 ∈ È 𝑎 𝜋 𝑖 É V ( Π ) . Predicates:
We have
L(𝒜_{𝑋_𝑖}[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))]) = L(𝒜_{𝑋_𝑖}[𝑋_𝑖 : L(V(𝑋_𝑖)(Π))]) = L(V(𝑋_𝑖)(Π)) since 𝒜_{𝑋_𝑖} only consists of the hole 𝑋_𝑖.

Let 𝑣 ∈ ⟦𝑋_𝑖⟧_V(Π). By the definition of the semantics, we have 𝑣 ∈ V(𝑋_𝑖)(Π), which directly implies 𝑤_Π[𝑣] ∈ L(V(𝑋_𝑖)(Π)).

Let 𝑤_Π[𝑣] ∈ L(V(𝑋_𝑖)(Π)). By the definition of 𝑤_Π, this implies 𝑣 ∈ V(𝑋_𝑖)(Π), which in turn implies 𝑣 ∈ ⟦𝑋_𝑖⟧_V(Π).

Boolean connectives:
We do the case for disjunction; the other case is analogous. We divide the free variables 𝑋_1, ..., 𝑋_𝑚 into two sets 𝑌_1, ..., 𝑌_{𝑚_1} and 𝑍_1, ..., 𝑍_{𝑚_2} (which may be non-disjoint) such that 𝑌_1, ..., 𝑌_{𝑚_1} are the free variables in 𝜓_1 and 𝑍_1, ..., 𝑍_{𝑚_2} are the free variables in 𝜓_2. By construction, we have L(𝒜_{𝜓_1 ∨ 𝜓_2})[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))] = L(𝒜_{𝜓_1})[𝑌_1 : L(V(𝑌_1)(Π)), ..., 𝑌_{𝑚_1} : L(V(𝑌_{𝑚_1})(Π))] ∪ L(𝒜_{𝜓_2})[𝑍_1 : L(V(𝑍_1)(Π)), ..., 𝑍_{𝑚_2} : L(V(𝑍_{𝑚_2})(Π))]. Then both directions of the claim ensue from the induction hypothesis.

Next:
We handle a formula of the form ◯_{𝜋_𝑖} 𝜓. Note that 𝑋_1, ..., 𝑋_𝑚 are exactly the free variables of 𝜓 as well. By construction we have 𝑤[𝑣] ∈ L(𝒜_{◯_{𝜋_𝑖} 𝜓}[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))]) iff 𝑤[𝑣 + 𝑒_𝑖] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))]) for all words 𝑤 and the unit vector 𝑒_𝑖 with 1 in component 𝑖. Then both directions of the claim ensue directly from the induction hypothesis.

Fixpoint expressions:
We do the case for a 𝜇 formula 𝜇𝑋.𝜓; the other case is analogous. Since we assume that every path predicate is bound by a unique fixpoint expression, we have 𝑋 ∈ 𝑓𝑟𝑒𝑒(𝜓) and 𝑋 ∉ 𝑓𝑟𝑒𝑒(𝜇𝑋.𝜓). Note that the priority of state (𝑋) is an odd strict lower bound on the priorities in 𝒜_{𝜇𝑋.𝜓}. Thus, every path of an accepting run of 𝒜_{𝜇𝑋.𝜓} can only visit (𝑋) finitely often. Also, by construction, any path of a run visiting (𝑋) at some point must then proceed from the start of the automaton in the next step. Therefore, L(𝒜_{𝜇𝑋.𝜓}[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))]) can be characterised as the least fixpoint of the function 𝑓 : L ↦→ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : L]), or by a union of its approximants ⋃_{𝜅 ≥ 0} 𝑓^𝜅(∅) where 𝑓^0(L) = L, 𝑓^{𝜅+1}(L) = 𝑓(𝑓^𝜅(L)) and 𝑓^𝜆(L) = ⋃_{𝜅<𝜆} 𝑓^𝜅(L) for ordinals 𝜅 and limit ordinals 𝜆. On the other hand, ⟦𝜇𝑋.𝜓⟧_V is the least fixpoint of the function 𝛼 : 𝜉 ↦→ ⟦𝜓⟧_{V[𝑋 ↦→ 𝜉]} and can be characterised as a union of its approximants ⨆_{𝜅 ≥ 0} 𝛼^𝜅(⊥) where 𝛼^0(𝜉) = 𝜉, 𝛼^{𝜅+1}(𝜉) = 𝛼(𝛼^𝜅(𝜉)) and 𝛼^𝜆(𝜉) = ⨆_{𝜅<𝜆} 𝛼^𝜅(𝜉) by Corollary 4.5. We now show that 𝑣 ∈ 𝛼^𝜅(⊥)(Π) iff 𝑤_Π[𝑣] ∈ 𝑓^𝜅(∅) (*) for all ordinals 𝜅 ≥
1, which establishes the theorem for this case. Indeed, for arbitrary 𝜅, this ensues directly from the induction hypothesis and the following claim:

Claim: 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_𝑚)(Π))]) iff 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜅(∅)]).

We prove the claim by a transfinite induction on 𝜅. To avoid confusion, we will denote the structural induction's hypothesis by (IH₁) and this induction's hypothesis by (IH₂). Also, since (*) for some 𝜅 follows directly from the claim for the same 𝜅, we can use this as (IH₂).

For the base case 𝜅 =
0, let 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ ⊥](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ ⊥](𝑋_𝑚)(Π))]). Since V[𝑋 ↦→ ⊥](𝑋)(Π) = ∅, we can exchange L(V[𝑋 ↦→ ⊥](𝑋)(Π)) with ∅ in the substitutions. Furthermore, since L(V[𝑋 ↦→ ⊥](𝑋_𝑖)(Π)) does not depend on V[𝑋 ↦→ ⊥](𝑋) for 𝑋_𝑖 ≠ 𝑋, we can exchange the predicate valuations without changing language containment of 𝑤_Π[𝑣].

On the other hand, let 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : ∅]). With the same arguments as before, we can exchange V with V[𝑋 ↦→ ⊥] and ∅ with L(V[𝑋 ↦→ ⊥](𝑋)(Π)) in the substitutions without changing language containment of 𝑤_Π[𝑣], immediately giving us the desired result.

In the inductive step 𝜅 ↦→ 𝜅 +
1, let 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_𝑚)(Π))]). We show that the accepting run is also an accepting run of 𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^{𝜅+1}(∅)] on 𝑤_Π[𝑣]. First notice that the run being an accepting run only depends on V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋) in states (𝑋); thus replacing the predicate valuation with V will result in the same behaviour up to states (𝑋). Secondly, notice that for all 𝑣′ ∈ ℕ^𝑛, when state (𝑋) is reached with directions according to 𝑣′, then 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^{𝜅+1}(∅). Therefore let 𝑣′ be an arbitrary vector such that (𝑋) is reached with directions according to 𝑣′. By definition of L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋)(Π)), we then have 𝑣 + 𝑣′ ∈ 𝛼^{𝜅+1}(⊥)(Π) = 𝛼(𝛼^𝜅(⊥))(Π) = ⟦𝜓⟧_{V[𝑋 ↦→ 𝛼^𝜅(⊥)]}(Π). Using (IH₁), we get that 𝑤_Π[𝑣 + 𝑣′] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_𝑚)(Π))]). Now (IH₂) applies and we have 𝑤_Π[𝑣 + 𝑣′] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜅(∅)]) = 𝑓^{𝜅+1}(∅). Thus, we have an accepting run of 𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^{𝜅+1}(∅)] on 𝑤_Π[𝑣], which implies 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^{𝜅+1}(∅)]).

On the other hand, assume that 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^{𝜅+1}(∅)]). Again, we show that the accepting run is also an accepting run of 𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_𝑚)(Π))] on 𝑤_Π[𝑣]. We notice that replacing V with V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)] will result in the same behaviour up to states (𝑋). Next we notice that for all vectors 𝑣′ ∈ ℕ^𝑛, when state (𝑋) is reached according to directions in 𝑣′ in the accepting run, then 𝑤_Π[𝑣 + 𝑣′] ∈ L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋)(Π)).
Therefore let 𝑣′ be an arbitrary vector such that (𝑋) is reached according to directions in 𝑣′. Due to the substitutions, this implies 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^{𝜅+1}(∅) = L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜅(∅)]). Now (IH₂) applies and we have 𝑤_Π[𝑣 + 𝑣′] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜅(⊥)](𝑋_𝑚)(Π))]). Using (IH₁), we then get 𝑣 + 𝑣′ ∈ ⟦𝜓⟧_{V[𝑋 ↦→ 𝛼^𝜅(⊥)]}(Π) = 𝛼(𝛼^𝜅(⊥))(Π) = 𝛼^{𝜅+1}(⊥)(Π) and thus 𝑤_Π[𝑣 + 𝑣′] ∈ L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋)(Π)). These two facts imply that the run also witnesses 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^{𝜅+1}(⊥)](𝑋_𝑚)(Π))]).

For the limit case 𝜅 < 𝜆 ↦→ 𝜆, let 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_𝑚)(Π))]). Just as in the inductive step, we have to show two claims: (i) replacing V[𝑋 ↦→ 𝛼^𝜆(⊥)] with V results in the same behaviour up to states (𝑋), and (ii) for all 𝑣′ ∈ ℕ^𝑛, if (𝑋) is reached according to directions 𝑣′, then 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^𝜆(∅). While claim (i) is trivial, we have to rely on more arguments for claim (ii). Thus let 𝑣′ be an arbitrary vector such that (𝑋) is reached according to directions 𝑣′ in the accepting run. By definition of L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋)(Π)), we then have 𝑣 + 𝑣′ ∈ 𝛼^𝜆(⊥)(Π) = (⨆_{𝜅<𝜆} 𝛼^𝜅(⊥))(Π) = (𝜆Π′. ⋃_{𝜅<𝜆} 𝛼^𝜅(⊥)(Π′))(Π). Thus, there is a 𝜅 such that 𝑣 + 𝑣′ ∈ 𝛼^𝜅(⊥)(Π). Using (IH₂) we then have 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^𝜅(∅), implying 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^𝜆(∅) and therefore claim (ii).
Combining claims (i) and (ii), we can argue that the accepting run of 𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_𝑚)(Π))] on 𝑤_Π[𝑣] is an accepting run of 𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜆(∅)] on 𝑤_Π[𝑣] as well.

On the other hand, let 𝑤_Π[𝑣] ∈ L(𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜆(∅)]). Again, we have to show two claims similar to those in the inductive step: (i) V can be replaced with V[𝑋 ↦→ 𝛼^𝜆(⊥)] in the substitutions, resulting in the same behaviour up to states (𝑋), and (ii) for all vectors 𝑣′ ∈ ℕ^𝑛, if state (𝑋) is reached according to directions in 𝑣′ in the accepting run, then 𝑤_Π[𝑣 + 𝑣′] ∈ L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋)(Π)), i.e. 𝑣 + 𝑣′ ∈ 𝛼^𝜆(⊥)(Π). The first claim is trivial. For the second claim, let 𝑣′ be a vector such that (𝑋) is reached according to directions in 𝑣′. Due to the substitutions, this implies 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^𝜆(∅) = ⋃_{𝜅<𝜆} 𝑓^𝜅(∅). Thus, there is some 𝜅 such that 𝑤_Π[𝑣 + 𝑣′] ∈ 𝑓^𝜅(∅). Now (IH₂) applies and we have 𝑣 + 𝑣′ ∈ 𝛼^𝜅(⊥)(Π) ⊆ (⨆_{𝜅<𝜆} 𝛼^𝜅(⊥))(Π) = 𝛼^𝜆(⊥)(Π), thus claim (ii) holds. Combining the two facts, we see that the accepting run of 𝒜_𝜓[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π)), 𝑋 : 𝑓^𝜆(∅)] on 𝑤_Π[𝑣] is an accepting run of 𝒜_𝜓[𝑋_1 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V[𝑋 ↦→ 𝛼^𝜆(⊥)](𝑋_𝑚)(Π))] on 𝑤_Π[𝑣] as well. □

C.3 Proof of Theorem 5.2 part 2
In this section, we prove that the construction presented in subsection 5.2 indeed results in a formula that is K-equivalent to 𝒜. To this end, we inductively construct intermediate automata 𝒜^ℎ_𝑖 capturing exactly the behaviour of 𝜓^ℎ_𝑖. We combine them in an automaton 𝒜_𝑖 moving into the automata 𝒜^ℎ_𝑖 according to 𝒜's starting function 𝜌_0. The proof of our main result then ensues from two lemmas about 𝒜^ℎ_𝑖 and 𝒜_𝑖.

Construction of 𝒜^ℎ_𝑖: For the construction of 𝒜^ℎ_𝑖, the indices range from 0 to 𝑛 − 𝑚 and 1 to 𝑛, just like in the construction for 𝜓^ℎ_𝑖. The automaton is given as follows: 𝒜^ℎ_𝑖 = (𝑄′ ∪ ˆ𝑄, ˆ𝑞_ℎ, 𝜌_𝑖, Ω′) where
• ˆ𝑄 = {ˆ𝑞 | 𝑞 ∈ 𝑄} is a copy of 𝑄,
• 𝑄′ is a copy of 𝑄 where for 𝑗 > 𝑖, the state 𝑞_𝑗 is substituted by a hole,
• Ω′(𝑞) = Ω′(ˆ𝑞) = Ω(𝑞) for all 𝑞 ∈ 𝑄,
• 𝜌_𝑖(ˆ𝑞_ℎ, (𝑠, 𝑣), 𝑑) = 𝜌(𝑞_ℎ, (𝑠, 𝑣), 𝑑), and
• 𝜌_𝑖(𝑞_ℎ, (𝑠, 𝑣), 𝑑) = 𝜌(𝑞_ℎ, (𝑠, 𝑣), 𝑑) for ℎ ≤ 𝑖.

Construction of 𝒜_𝑖: The automaton is given as 𝒜_𝑖 = (𝑄′ ∪ ˆ𝑄, ˆ𝜌_0, 𝜌_𝑖, Ω′) where
• 𝑄′, ˆ𝑄, 𝜌_𝑖 and Ω′ are taken from an arbitrary 𝒜^ℎ_𝑖, and
• ˆ𝜌_0 = 𝜌_0[𝑞_1/ˆ𝑞_1] ... [𝑞_𝑛/ˆ𝑞_𝑛].

Lemma C.2. 𝒜^ℎ_𝑖 is K-equivalent to 𝜓^ℎ_𝑖.

Proof.
The proof proceeds by an induction on 𝑖.

Base case 𝑖 = 0: For ℎ > 𝑛 − 𝑚, we have 𝜓^ℎ_0 = 𝑋_ℎ and 𝒜^ℎ_0 has (a copy of) the hole 𝑋_ℎ as its starting state. Then 𝑣 ∈ ⟦𝑋_ℎ⟧_V(Π) iff 𝑣 ∈ V(𝑋_ℎ)(Π) iff 𝑤_Π[𝑣] ∈ L(V(𝑋_ℎ)(Π)) iff 𝑤_Π[𝑣] ∈ 𝒜^ℎ_0[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))].

For ℎ ≤ 𝑛 − 𝑚, all of the states 𝑞_ℎ are replaced by holes while the states ˆ𝑞_ℎ inherit their transitions from 𝑞_ℎ in 𝒜. Thus 𝑤_Π[𝑣] ∈ L(𝒜^ℎ_0[𝑋_1 : L(V(𝑋_1)(Π)), ..., 𝑋_𝑚 : L(V(𝑋_𝑚)(Π))]) implies that there is a combination of holes {𝑌_1, ..., 𝑌_𝑙} ⊆ {𝑋_1, ..., 𝑋_𝑚} making 𝜌(𝑞_ℎ, 𝑤_Π(𝑣)|_𝑑, 𝑑) true for some 𝑑 such that 𝑤_Π[𝑣 + 𝑒_𝑑] ∈ L(V(𝑌_𝑗)(Π)) and therefore 𝑣 + 𝑒_𝑑 ∈ V(𝑌_𝑗)(Π) for all 1 ≤ 𝑗 ≤ 𝑙. One can easily see that for this 𝑑 and 𝜎 = 𝑤_Π(𝑣)|_𝑑, we then have 𝑣 ∈ ⟦𝜎_{𝜋_𝑑} ∧ ◯_{𝜋_𝑑} ˆ𝜌(𝑞_ℎ, 𝜎, 𝑑)⟧_V(Π) and thus 𝑣 ∈ ⟦𝜓^ℎ_0⟧_V(Π). The other direction works in a similar way when additionally noticing that, by the definition of 𝑤_Π, the 𝜎 making the disjunction in 𝜓^ℎ_0 true has to match 𝑣 in direction 𝑑 and can therefore only be 𝑤_Π(𝑣)|_𝑑.

Inductive step: 𝑖 ↦→ 𝑖 +
1: We first show the claim for ℎ = 𝑖 + 1 and then consider ℎ ≠ 𝑖 +
1. Also, we consider the case where 𝜓^{𝑖+1}_{𝑖+1} = 𝜇𝑋_{𝑖+1}.𝜓^{𝑖+1}_𝑖 and 𝑞_{𝑖+1} has an odd priority, since the other case is analogous. Since all states 𝑞_𝑗 with 𝑗 > 𝑖 + 1 are substituted by holes, Ω(𝑞_{𝑖+1}) is the lowest priority in all automata 𝒜^ℎ_{𝑖+1}. In this case, where Ω(𝑞_{𝑖+1}) is odd, any path in an accepting run of 𝒜^ℎ_{𝑖+1} on some word 𝑤 can only visit 𝑞_{𝑖+1} a finite number of times. Since ˆ𝑞_{𝑖+1} in 𝒜^ℎ_𝑖 has the same transitions as 𝑞_{𝑖+1} in 𝒜^ℎ_{𝑖+1} up to the hole substitution for 𝑞_{𝑖+1}, L(𝒜^{𝑖+1}_{𝑖+1}[𝑋_{𝑖+2} : L(V(𝑋_{𝑖+2})(Π)), ..., 𝑋_𝑛 : L(V(𝑋_𝑛)(Π))]) can be characterised as the least fixpoint of the function 𝑓 : L ↦→ 𝒜^{𝑖+1}_𝑖[𝑋_{𝑖+1} : L, 𝑋_{𝑖+2} : L(V(𝑋_{𝑖+2})(Π)), ..., 𝑋_𝑛 : L(V(𝑋_𝑛)(Π))], or as a union of its approximants ⋃_{𝜅 ≥ 0} 𝑓^𝜅(∅). On the other hand, the semantics of 𝜓^{𝑖+1}_{𝑖+1} can be characterised as a union of its approximants ⨆_{𝜅 ≥ 0} 𝛼^𝜅(⊥) using Corollary 4.5.

We show that 𝑤_Π[𝑣] ∈ 𝑓^𝜅(∅) iff 𝑣 ∈ 𝛼^𝜅(⊥)(Π) for all 𝜅. Indeed, for arbitrary 𝜅 ≥
1, this follows immediately from the definition of 𝑓 and 𝛼, the induction hypothesis, and the following claim:

Claim: 𝑤_Π[𝑣] ∈ L(𝒜^{𝑖+1}_𝑖[𝑋_{𝑖+1} : 𝑓^{𝜅−1}(∅), 𝑋_{𝑖+2} : L(V(𝑋_{𝑖+2})(Π)), ..., 𝑋_𝑛 : L(V(𝑋_𝑛)(Π))]) holds iff we have 𝑤_Π[𝑣] ∈ L(𝒜^{𝑖+1}_𝑖[𝑋_{𝑖+2} : L(V[𝑋_{𝑖+1} ↦→ 𝛼^{𝜅−1}(⊥)](𝑋_{𝑖+2})(Π)), ..., 𝑋_𝑛 : L(V[𝑋_{𝑖+1} ↦→ 𝛼^{𝜅−1}(⊥)](𝑋_𝑛)(Π))]).

This can be shown by a transfinite induction on 𝜅, similar to the one in the proof of Theorem 5.2 part 1.

Now we consider the case where ℎ ≠ 𝑖 + 1. Let 𝑣 ∈ ⟦𝜓^ℎ_{𝑖+1}⟧_V(Π) = ⟦𝜓^ℎ_𝑖[𝑋_{𝑖+1}/𝜓^{𝑖+1}_{𝑖+1}]⟧_V(Π). Due to the way the substitution was done, we have a set 𝑉 of vectors 𝑣′ ∈ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V(Π) such that all 𝑣′ ∈ 𝑉 combined witness 𝑣 ∈ ⟦𝜓^ℎ_{𝑖+1}⟧_V(Π). Considering some V′ with 𝑉 ⊆ V′(𝑋_{𝑖+1})(Π), we then have 𝑣 ∈ ⟦𝜓^ℎ_𝑖⟧_{V′}(Π), in particular 𝑣 ∈ ⟦𝜓^ℎ_𝑖⟧_{V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V]}(Π). Using the induction hypothesis, we get 𝑤_Π[𝑣] ∈ L(𝒜^ℎ_𝑖[𝑋_{𝑖+1} : V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V](𝑋_{𝑖+1})(Π), ..., 𝑋_𝑛 : V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V](𝑋_𝑛)(Π)]). For 𝑋_𝑗 ≠ 𝑋_{𝑖+1} we can remove the substitution in V without changing language containment. Now all that is left to show is that we can remove the substitution for hole 𝑋_{𝑖+1} and go over to 𝒜^ℎ_{𝑖+1} without changing language containment as well. This is done by using the claim for ℎ = 𝑖 + 1 to replace 𝜓^{𝑖+1}_{𝑖+1}'s semantics with the language of 𝒜^{𝑖+1}_{𝑖+1} (with appropriate substitutions) and then noticing that 𝒜^ℎ_{𝑖+1} is actually the same automaton as 𝒜^{𝑖+1}_{𝑖+1} with a different starting state. Thus, instead of moving to hole 𝑋_{𝑖+1} using the substitution of 𝒜^ℎ_{𝑖+1}, we can instead move to 𝑞_{𝑖+1} and obtain the same behaviour. Since this is exactly the difference between 𝒜^ℎ_𝑖 and 𝒜^ℎ_{𝑖+1}, we obtain our result.

On the other hand, let 𝑤_Π[𝑣] ∈ L(𝒜^ℎ_{𝑖+1}[𝑋_{𝑖+2} : L(V(𝑋_{𝑖+2})(Π)), ..., 𝑋_𝑛 : L(V(𝑋_𝑛)(Π))]).
Just like in the proof of the other direction, we notice that 𝒜^ℎ_{𝑖+1} is the same automaton as 𝒜^{𝑖+1}_{𝑖+1} with a different starting state. Thus, whenever a path in the accepting run moves into 𝑞_{𝑖+1}, we can instead move to a hole 𝑋_{𝑖+1} substituted with the language of 𝒜^{𝑖+1}_{𝑖+1} (with appropriate substitutions) and obtain the same behaviour, having gone over to 𝒜^ℎ_𝑖. Using the claim for ℎ = 𝑖 + 1, we can replace this language substituted for 𝑋_{𝑖+1} with ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V(Π). Since for 𝑋_𝑗 ≠ 𝑋_{𝑖+1}, a modification of V on 𝑋_{𝑖+1} does not change behaviour, we can condense all substitutions into a predicate environment V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V]. Thus we obtain 𝑤_Π[𝑣] ∈ L(𝒜^ℎ_𝑖[𝑋_{𝑖+1} : V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V](𝑋_{𝑖+1})(Π), ..., 𝑋_𝑛 : V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V](𝑋_𝑛)(Π)]), where we can apply the induction hypothesis to get 𝑣 ∈ ⟦𝜓^ℎ_𝑖⟧_{V[𝑋_{𝑖+1} ↦→ ⟦𝜓^{𝑖+1}_{𝑖+1}⟧_V]}(Π). Instead of substituting 𝜓^{𝑖+1}_{𝑖+1}'s semantics for 𝑋_{𝑖+1} in the predicate environment V, we can instead substitute 𝜓^{𝑖+1}_{𝑖+1} for 𝑋_{𝑖+1} in the formula without changing behaviour. Therefore we have 𝑣 ∈ ⟦𝜓^ℎ_𝑖[𝑋_{𝑖+1}/𝜓^{𝑖+1}_{𝑖+1}]⟧_V(Π) = ⟦𝜓^ℎ_{𝑖+1}⟧_V(Π). □

Lemma C.3.
L(𝒜[𝑋_1 : 𝒜_1, ..., 𝑋_𝑚 : 𝒜_𝑚]) = L(𝒜_{𝑛−𝑚}[𝑋_1 : 𝒜_1, ..., 𝑋_𝑚 : 𝒜_𝑚]) for all AAPA 𝒜_1, ..., 𝒜_𝑚.

Proof.
Since for all 𝑗 > 𝑛 − 𝑚 the state 𝑞_𝑗 is a hole, no states in 𝑄′ are further substituted by holes. Therefore, the set of states in 𝒜_{𝑛−𝑚} consists of two copies 𝑄′ and ˆ𝑄 of 𝑄, where moves from ˆ𝑄 to 𝑄′ are possible, but not the other way around. Since ˆ𝑄 is left after one transition, any run of 𝒜_{𝑛−𝑚} on some word 𝑤 behaves just like a run of 𝒜 where the first state is substituted by its copy and vice versa. Thus, the two automata recognise the same language. □

Proof of Theorem 5.2 part 2.
The sought formula is given as 𝜌_0[𝑞_1/𝜓^1_{𝑛−𝑚}] ... [𝑞_𝑛/𝜓^𝑛_{𝑛−𝑚}], where 𝜌_0 is the starting function of 𝒜. Our claim immediately follows from Lemma C.2 and Lemma C.3. □
Proof.
We show that the syntactic restrictions of the formula yield automata only having the respective kinds of runs.

For a synchronous formula, ◯𝜓 constructs are used instead of ◯_𝑖 𝜓 constructs. These are translated to automata where a transition to 𝒜_𝜓 is performed only after a symbol from each direction is read. The different nodes reading their respective direction can then be merged into a single node reading a vector. Also, the construction for atomic propositions can straightforwardly be adapted to read a vector. Since the transition functions in all other constructions are defined inductively, this translates the AAPA into an APA.

For 𝑘-synchronous and 𝑘-context-bounded formulas, notice that the construction is performed such that each node in a run of 𝒜_𝜓 corresponds to a node in the extended syntax tree of 𝜓 (but not necessarily the other way around due to disjunctions). Thus, a restriction on the extended syntax tree straightforwardly translates to a restriction on the runs of the corresponding AAPA. □

D.2 Showing the K-equivalence of quantified formulas to their respective automata
We show the proof by a structural induction with three cases. Here, we only consider the case for an innermost quantifier ∃𝜋_{𝑛+1}.𝜑 in depth since the other cases are similar or trivial.

Existential quantifiers:
We show Π ⊨^K_𝑘 ∃𝜋_{𝑛+1}.𝜑 iff 𝑤_Π ∈ 𝒜_{∃𝜋_{𝑛+1}.𝜑}.

Assume Π ⊨^K_𝑘 ∃𝜋_{𝑛+1}.𝜑 holds. By the definition of the semantics, this implies that there is a 𝑝 ∈ 𝑃𝑎𝑡ℎ𝑠(K) such that Π[𝜋_{𝑛+1} ↦→ 𝑝] ⊨^K_𝑘 𝜑. We use the induction hypothesis and obtain that 𝑤_{Π[𝜋_{𝑛+1} ↦→ 𝑝]} ∈ 𝒜_𝜑, i.e. 𝒜_𝜑 has an accepting run 𝑞′_0 𝑞′_1 ... on 𝑤_{Π[𝜋_{𝑛+1} ↦→ 𝑝]}. Since 𝑤_{Π[𝜋_{𝑛+1} ↦→ 𝑝]} has one additional 𝑆-component 𝑠_0 𝑠_1 ... compared to 𝑤_Π and this component represents a path of K starting in 𝑠_0, we can simulate this additional component by the state space of 𝒜_𝜑 and obtain an accepting run 𝑞_0 𝑞_1 ... = (𝑞′_0, 𝑠_0)(𝑞′_1, 𝑠_1) ... of 𝒜_{∃𝜋_{𝑛+1}.𝜑} on 𝑤_Π.

On the other hand, let 𝑤_Π ∈ L(𝒜_{∃𝜋_{𝑛+1}.𝜑}). Then we have an accepting run 𝑞_0 𝑞_1 ... = (𝑞′_0, 𝑠_0)(𝑞′_1, 𝑠_1) ... of 𝒜_{∃𝜋_{𝑛+1}.𝜑} on 𝑤_Π. Due to the way 𝒜_{∃𝜋_{𝑛+1}.𝜑} was constructed, the second component 𝑠_0 𝑠_1 ... represents a path 𝑝 of K starting in 𝑠_0. Additionally, the automaton makes sure that 𝑞′_0 𝑞′_1 ... is an accepting run of 𝒜_𝜑 on 𝑤_{Π[𝜋_{𝑛+1} ↦→ 𝑝]}. We use the induction hypothesis to obtain that Π[𝜋_{𝑛+1} ↦→ 𝑝] ⊨^K_𝑘 𝜑. Thus, the existence of 𝑝 witnesses Π ⊨^K_𝑘 ∃𝜋_{𝑛+1}.𝜑.

Universal quantifiers:
Analogous to the previous case.
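The existential case above simulates the quantified path inside the automaton's state space. This idea can be sketched on finite words with a plain NFA product, a deliberate simplification of the parity construction in the paper; all function and variable names below are hypothetical.

```python
from itertools import product

def exists_projection(nfa, kripke):
    """Sketch: handle an existential path quantifier by guessing the
    quantified path inside the state space (finite-word NFA
    simplification; names hypothetical).

    nfa:    (states, init, delta, finals) over letters (a, s), where the
            extra component s is the quantified path's Kripke state.
    kripke: (states, init, edges) with edges: state -> set of successors.
    """
    states, init, delta, finals = nfa
    kstates, kinit, kedges = kripke
    # Product states remember the guessed current Kripke state.
    pstates = set(product(states, kstates))
    pinit = {(q, kinit) for q in init}

    def pdelta(pq, a):
        q, s = pq
        # Feed the guessed Kripke state to the original automaton as an
        # extra letter component, then guess the quantified path's next state.
        return {(q2, s2) for q2 in delta(q, (a, s)) for s2 in kedges[s]}

    pfinals = {(q, s) for q in finals for s in kstates}
    return pstates, pinit, pdelta, pfinals
```

The product automaton accepts a word over the remaining components iff some path of the Kripke structure can be supplied for the quantified component, mirroring the "simulate the additional component by the state space" step of the proof.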
Innermost formula:
Here we use Theorem 5.2 as a lemma to directly obtain the result from the semantics definition. □
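The approximant characterisations ⋃_{𝜅} 𝑓^𝜅(∅) and ⨆_{𝜅} 𝛼^𝜅(⊥) used throughout these proofs have a simple finite analogue: on a finite powerset lattice, the ascending chain of approximants of a monotone function stabilises after finitely many steps at the least fixpoint. A minimal Python sketch, illustrative only and with hypothetical names:

```python
def lfp_approximants(f, bottom=frozenset()):
    """Compute the least fixpoint of a monotone function f on a finite
    powerset lattice as the limit of its approximant chain
    f^0(bottom) <= f^1(bottom) <= ... (finite analogue of the
    transfinite chains used above)."""
    chain = [bottom]
    while True:
        nxt = frozenset(f(chain[-1]))
        if nxt == chain[-1]:          # fixpoint reached, chain stabilises
            return chain[-1], chain
        chain.append(nxt)

# Example: reachability as a least fixpoint, f(S) = {source} ∪ post(S).
edges = {1: {2}, 2: {3}, 3: {3}, 4: {1}}
f = lambda S: {1} | {t for s in S for t in edges[s]}
fix, chain = lfp_approximants(f)
# fix is {1, 2, 3}, reached after the chain ∅, {1}, {1, 2}, {1, 2, 3}
```

In the infinite setting of the paper the chain need not stabilise at any finite stage, which is why the proofs argue by transfinite induction over ordinal-indexed approximants instead.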
D.3 Proof of Lemma 6.3
Proof.
The three claims follow from the observation that the analyses produce synchronous, 𝑘-synchronous and 𝑘-context-bounded AAPA, in combination with Theorem 3.11, Corollary 3.16 and a further argument we present here.

The mentioned argument is that the construction for quantifiers increases the size of the construction exponentially in the worst case for each alternation removal that is performed. Since the Kripke structure K is incorporated into the automaton only after the first alternation removal, the complexity in the size of K is one exponent lower than in the size of 𝜑. It remains to show that the number of alternation removals is equal to the number of quantifier alternations plus one, rather than the number of quantifiers, which a naive look at the construction would imply. This is due to the fact that the construction results in an NPA, and thus no further alternation removal is needed when no complementation constructions are performed in between the quantifier constructions. For existential quantifiers one can see this directly, while for universal quantifiers this is due to the elimination of double negations. An outermost negation does not have to be included in the construction since flipping the result of the test performed on the automaton has the same effect. □
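The counting argument can be made concrete: writing the quantifier prefix as a word over {∃, ∀}, the number of alternation removals is the number of adjacent quantifier alternations plus one. A small sketch, using a hypothetical encoding with 'E' for ∃ and 'A' for ∀:

```python
def alternation_removals(prefix):
    """Sketch of the counting argument: a complementation (and hence an
    alternation removal) is only needed where adjacent quantifiers
    differ, plus once for the quantifier-free core.
    prefix is a string over {'E', 'A'}, e.g. 'EEAAE'."""
    if not prefix:
        return 1
    alternations = sum(1 for a, b in zip(prefix, prefix[1:]) if a != b)
    return alternations + 1

# A block of like quantifiers costs a single removal:
# alternation_removals('EEE') -> 1, alternation_removals('EEAAE') -> 3,
# whereas counting one removal per quantifier would give 3 and 5.
```

This matches the statement above: the exponential blow-up is governed by the number of quantifier alternations, not by the total number of quantifiers.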