Adaptive Synchronisation of Pushdown Automata
A. R. Balasubramanian
Technische Universität München, Munich, Germany https://arbalan96.github.io/ [email protected]
K. S. Thejaswini
Department of Computer Science, University of Warwick, [email protected]
Abstract
We introduce the notion of adaptive synchronisation for pushdown automata, in which there is an external observer who has no knowledge about the current state of the pushdown automaton, but can observe the contents of the stack. The observer would then like to decide if it is possible to bring the automaton from any state into some predetermined state by giving inputs to it in an adaptive manner, i.e., the next input letter to be given can depend on how the contents of the stack changed after the current input letter. We show that for non-deterministic pushdown automata, this problem is 2-EXPTIME-complete and for deterministic pushdown automata, we show EXPTIME-completeness. To prove the lower bounds, we first introduce (different variants of) subset-synchronisation and show that these problems are polynomial-time equivalent with the adaptive synchronisation problem. We then prove hardness results for the subset-synchronisation problems. For proving the upper bounds, we consider the problem of deciding if a given alternating pushdown system has an accepting run with at most k leaves and we provide an n^{O(k)} time algorithm for this problem.

2012 ACM Subject Classification Theory of computation → Grammars and context-free languages; Theory of computation → Problems, reductions and completeness
Keywords and phrases
Adaptive synchronisation, Pushdown automata, Alternating pushdownsystems
Funding
A. R. Balasubramanian : Supported by funding from the European Research Council(ERC) under the European Union’s Horizon 2020 research and innovation programme under grantagreement No 787367 (PaVeS).
Acknowledgements
We would like to thank Dmitri Chistikov for referring us to previous works on this topic.
1 Introduction

The notion of a synchronizing word for finite-state machines is a classical concept in computer science which consists of deciding, given a finite-state machine, whether there is a word which brings all of its states to a single state. Intuitively, assuming that we initially do not know which state the machine is in, such a word synchronises it to a single state and assists in regaining control over the machine. This idea has been studied for many types of finite-state machines [23, 21, 1, 8] with applications in biocomputing [2], planning and robotics [9, 18] and testing of reactive systems [17, 13]. In recent years, the notion of a synchronizing word has been extended to various infinite-state systems such as timed automata [7], register automata [19], nested word automata [6], pushdown and visibly pushdown automata [10, 11]. In particular, for the pushdown case, Fernau, Wolf and Yamakami [11] have shown that this problem is undecidable even for deterministic pushdown automata.

When the finite-state machine can produce outputs, the notion of synchronisation can be further refined to give rise to synchronisation under partial observation or adaptive synchronisation (see Chapter 1 of [4] and [16]). In this setting, there is an external observer who does not know the current state of the machine; however, she can give inputs to the machine and observe the outputs given by the machine. Depending on the outputs of the machine, she can adaptively decide which input letter to give next. In this manner, the observer would like to bring the machine into some predetermined state. Larsen, Laursen and Srba [16] describe an example of adaptive synchronisation pertaining to the orientation of a simplified model of satellites, in which they observe that adaptively choosing the input letter is sometimes necessary in order to achieve synchronisation.
In this paper, we extend this notion of adaptive synchronisation to pushdown automata (PDA). In our model, the observer does not know which state the PDA is currently in, but can observe the contents of the stack. She would then like to decide if it is possible to synchronise the PDA into some state by giving inputs to the PDA adaptively, i.e., depending on how the stack changes after each input. To the best of our knowledge, the notion of adaptive synchronisation has not been considered before for any class of infinite-state systems.

This question is a natural extension of the notion of adaptive synchronisation from finite-state machines to pushdown automata. Further, it is mentioned in the works of Lakhotia, Uday Kumar and Venable as well as Song and Touili [20, 15] that several antivirus systems determine whether a program is malicious by observing the calls that the program makes to the operating system. With this in mind, Song and Touili use pushdown automata [20] as abstractions of programs, where a stack stores the calls made by the program, and use this abstraction to detect viruses. Hence, we believe that our setting of being able to observe the changes happening to the stack can be practically motivated.

Our main results regarding adaptive synchronisation are as follows: We show that for non-deterministic pushdown automata, the problem is 2-EXPTIME-complete. However, by restricting our input to deterministic pushdown automata, we show that we can get EXPTIME-completeness, thereby obtaining an exponential reduction in complexity. We also consider a natural variant of this problem, called subset adaptive synchronisation, which is similar to adaptive synchronisation, except that the observer has more knowledge about which state the automaton is initially in. We obtain a surprising result that shows that this variant is polynomial-time equivalent to adaptive synchronisation, unlike in the case of finite-state machines. Furthermore, for the deterministic case of this variant, we obtain an algorithm that runs in time O(n^{ck}), where n is the size of the input and k is the size of the subset of states that the observer believes the automaton is initially in. This gives a polynomial-time algorithm if k is fixed and a quasi-polynomial-time algorithm if k = O(log n).

Used as a subroutine in the above decision procedure is an O(n^{ck})-time algorithm for the following question, which we call the sparse-emptiness problem: Given an alternating pushdown system and a number k, decide whether there is an accepting run of the system with at most k leaves. Intuitively, such a run means that the system has an accepting run in which it uses only "limited universal branching". We note that such a notion of alternation with "limited universal branching" has recently been studied by Keeler and Salomaa for alternating finite-state automata [14]. Our problem can be considered as a generalisation of one of their problems (Corollary 2 of [14]) to pushdown systems. We think that this problem and its associated algorithm might be of independent interest.

Roadmap: In Section 2, we introduce notations. In Section 3, we discuss different variations of the problem. In Sections 4 and 5 we prove lower and upper bounds respectively. Due to lack of space, some of the proofs can be found in the appendix.
2 Preliminaries

Given a finite set X, we let X* denote the set of all words over the alphabet X. As usual, the concatenation of two words x, y ∈ X* is denoted by xy.

We recall the well-known notion of a pushdown automaton. A pushdown automaton (PDA) is a 4-tuple P = (Q, Σ, Γ, δ) where Q is a finite set of states, Σ is the input alphabet, Γ is the stack alphabet and δ ⊆ (Q × Σ × Γ) × (Q × Γ*) is the transition relation. Alternatively, sometimes we will describe the transition relation δ as a function from Q × Σ × Γ to subsets of Q × Γ*. We will always use small letters a, b, c, ... to denote elements of Σ, capital letters A, B, C, ... to denote elements of Γ and Greek letters γ, η, ω, ... to denote elements of Γ*.

If (p, a, A, q, γ) ∈ δ then we sometimes denote it by (p, A) →a (q, γ). We say A is the top of the stack that is popped and γ is the string that is pushed onto the stack. A configuration of the automaton is a tuple (q, γ) where q ∈ Q and γ ∈ Γ*. Given two configurations (q, Aγ) and (q′, γ′γ) of P with A ∈ Γ, we say that (q, Aγ) →a (q′, γ′γ) iff (q, A) →a (q′, γ′). As is usual, we assume that there exists a special bottom-of-the-stack symbol ⊥ ∈ Γ, such that whenever some transition pops ⊥, it pushes it back in the bottom-most position. A PDA is said to be deterministic if for every q ∈ Q, a ∈ Σ and A ∈ Γ, δ(q, a, A) has exactly one element. If a PDA is deterministic, we further abuse notation and denote δ(q, a, A) as a single element and not as a set.

We first expand upon the intuition given in the introduction for adaptive synchronisation with the help of a running example. Consider the pushdown automaton given in Figure 1, where we do not know which state the automaton is in currently, but we do know that the stack content is ⊥. To synchronise the automaton to the state 4 when the stack is visible, the observer has a strategy as depicted in Figure 2.
The labelling of the nodes of the tree intuitively denotes the 'knowledge of the observer' at the current point in the strategy and the labelling of the edges denotes the letter that she inputs to the PDA. Initially, according to the observer, the automaton could be in any one of the 4 states. The observer first inputs the letter □. If the top of the stack becomes •, then she knows that the automaton is currently either in state 1 or 2. On the other hand, if the top of the stack becomes ◦, then the observer can deduce that the automaton is currently in state 3 or 4. From these two scenarios, by following the appropriate strategy depicted in the figure, we can see that she can synchronise the automaton to state 4. However, if the stack were hidden from the observer, reading either ✓ or □ would not change the knowledge of the observer and therefore, there is no word that can be read that would synchronise the automaton to any state.

We now formalize the notion of an adaptive synchronizing word that we have so far described. Let P = (Q, Σ, Γ, δ) be a PDA. Given S ⊆ Q, a ∈ Σ and A ∈ Γ, let T^a_{S,A} := {t ∈ δ | t = (p, a, A, q, γ) where p ∈ S}. Intuitively, if the observer knows that P is currently in some state in S and the top of the stack is A and she chooses to input a, then T^a_{S,A} is the set of transitions that might take place. We define an equivalence relation ∼^a_{S,A} on the elements of T^a_{S,A} as follows: t₁ ∼^a_{S,A} t₂ ⟺ ∃γ ∈ Γ* such that t₁ = (p₁, a, A, q₁, γ) and t₂ = (p₂, a, A, q₂, γ). Notice that if t₁ ∼^a_{S,A} t₂ then the observer cannot distinguish occurrences of t₁ from occurrences of t₂. In our running example, if we take S = {3, 4},
Figure 1 The running-example PDA on states 1, 2, 3 and 4. A label of a, A → γ means that if the input is a and if the top of the stack is A, then pop A and push γ.

Figure 2
A synchroniser between ({1, 2, 3, 4}, ⊥) and state 4 for the PDA in Figure 1.

a = ✓ and A = •, it is easy to see that T^a_{S,A} consists of one transition of state 3 and one of state 4, both popping • but pushing different words, and that these two transitions are not in the same equivalence class under ∼^a_{S,A}.

The relation ∼^a_{S,A} partitions the elements of T^a_{S,A} into equivalence classes. If E is an equivalence class of ∼^a_{S,A}, then notice that there is a word γ ∈ Γ* such that all the transitions in E pop A and push γ onto the stack. This word γ will be denoted by word(E). If we define next(E) := {q | (p, a, A, q, word(E)) ∈ E}, then next(E) contains all the states that the automaton can move to if any of the transitions from E occur. Now, suppose the observer knows that P is currently in some state in S with A being at the top of the stack. Assuming she inputs the letter a and observes that A has been popped and word(E) has been pushed, she can deduce that P is currently in some state in next(E). In our running example of S = {3, 4}, a = ✓ and A = •, there are two equivalence classes, E₁ containing the transition of state 3 and E₂ containing the transition of state 4; both next(E₁) and next(E₂) are singletons and word(E₁) ≠ word(E₂).

A pseudo-configuration of the automaton P is a pair (S, γ) such that S ⊆ Q and γ ∈ Γ*. The pseudo-configuration (S, γ) captures the knowledge of the observer at any given point. Given a pseudo-configuration (S, Aγ) and an input letter a, let Succ(S, Aγ, a) := {(next(E₁), word(E₁)γ), ..., (next(E_k), word(E_k)γ)} where E₁, ..., E_k are the equivalence classes of ∼^a_{S,A}. Each element of Succ(S, Aγ, a) will be called a possible successor of (S, Aγ) under the input letter a. The function Succ captures all the possible pseudo-configurations that could arise when the observer inputs a at the pseudo-configuration (S, Aγ).

We now define the notion of a synchroniser, which will correspond to a strategy for the observer to synchronise the automaton into some state. Let I ⊆ Q, s ∈ Q and γ ∈ Γ*. (The I stands for Initial set of states, and the s stands for synchronising state.) A synchroniser between the pseudo-configuration (I, γ) and the state s is a labelled tree T such that:
- All the edges are labelled by some input letter a ∈ Σ such that, for every vertex v, all its outgoing edges have the same label.
- The root is labelled by the pseudo-configuration (I, γ).
- Suppose v is a vertex which is labelled by the pseudo-configuration (S, Aη). Let a be the unique label of its outgoing edges and let Succ(S, Aη, a) be of size k. Then v has k children, with the i-th child labelled by the i-th pseudo-configuration in Succ(S, Aη, a).
- For every leaf, there exists η ∈ Γ* such that its label is ({s}, η).

In addition, if all the leaves are labelled by ({s}, ⊥), then T is called a super-synchroniser between (I, γ) and s. We use the notation (I, γ) ⟹_P s (resp. (I, γ) ⟹^sup_P s) to denote that there is a synchroniser (resp. super-synchroniser) between (I, γ) and s in the PDA P. (When P is clear from context, we drop it from the arrow notation.)

We now formally introduce the problem which we will refer to as the adaptive synchronising problem (Ada-Sync); it is defined as follows:
Given: A PDA P = (Q, Σ, Γ, δ) and a word γ ∈ Γ*
Decide: Whether there is a state s such that (Q, γ) ⟹ s

The Det-Ada-Sync problem is the same as Ada-Sync, except that the given pushdown automaton is deterministic. Notice that we can generalise the adaptive synchronising problem by the following subset adaptive synchronising problem (Subset-Ada-Sync): Given a PDA P = (Q, Σ, Γ, δ), a subset I ⊆ Q and a word γ ∈ Γ*, decide if there is a state s such that (I, γ) ⟹ s. Similarly, we can define Det-Subset-Ada-Sync.

Remark 1. One can also frame both of these problems in various other ways, such as "Given P, γ and q, does (Q, γ) ⟹ q?" or "Given P, γ, I, is there a q such that (I, γ) ⟹^sup q?" etc. We chose this version because this is similar to the way it is defined for the finite-state version (Problem 1 of [16]). Nevertheless, in order to make the lower bounds easier to understand, we introduce a few different variants of Ada-Sync and Subset-Ada-Sync in Section 3 and conclude that they are all polynomial-time equivalent with Ada-Sync. We defer a detailed analysis of the different variants of this problem to future work.
Remark 2. One can relax the notion of a synchroniser and ask instead for an adaptive "homing" word, which is the same as a synchroniser, except that we now only require that if (S, γ) is the label of a leaf then S is any singleton. Intuitively, in an adaptive homing word, we are content with knowing the state the automaton is in after applying the strategy, rather than enforcing the automaton to synchronise into some state. Due to lack of space, we state this problem formally and prove in the appendix that it is polynomial-time equivalent to Ada-Sync. In the main paper, we primarily focus on finding the complexity status of the problems Ada-Sync and Subset-Ada-Sync.

The main results of this paper are now as follows:
Theorem 3. Ada-Sync and Subset-Ada-Sync are both 2-EXPTIME-complete. Det-Ada-Sync and Det-Subset-Ada-Sync are both EXPTIME-complete.
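The knowledge update behind all of these problems is the function Succ of Section 2.2. As a minimal Python sketch (the tuple encoding (p, a, A, q, γ) of transitions is our own assumption, not part of the paper's formalism):

```python
from collections import defaultdict

def succ(delta, S, A, a):
    """Return the pairs (next(E), word(E)) for the equivalence classes E
    of ~^a_{S,A}; prefixing the unchanged stack suffix gives Succ."""
    # T^a_{S,A}: the transitions that might fire now.
    T = [(p, q, w) for (p, letter, top, q, w) in delta
         if p in S and letter == a and top == A]
    # Transitions pushing the same word are indistinguishable to the
    # observer, so group them by the pushed word.
    classes = defaultdict(set)
    for (p, q, w) in T:
        classes[w].add(q)                  # word(E) -> next(E)
    return {(frozenset(qs), w) for w, qs in classes.items()}
```

For instance, two transitions that pop the same symbol but push different words yield two separate successors, mirroring the split of the observer's knowledge in the running example.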
In this section, we show that the problems Ada-Sync and Subset-Ada-Sync are polynomial-time equivalent to each other. A similar result also holds for their corresponding deterministic versions. We note that such a result is not true for finite-state (Moore) machines (Table 1 of [16]) and so we provide a proof of this here, because it illustrates the significance of the stack in the pushdown version.

Lemma 4. Ada-Sync (resp. Det-Ada-Sync) is polynomial-time equivalent to Subset-Ada-Sync (resp. Det-Subset-Ada-Sync).
Proof. It suffices to show that Subset-Ada-Sync (resp. Det-Subset-Ada-Sync) can be reduced to Ada-Sync (resp. Det-Ada-Sync) in polynomial time.

Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q and γ ∈ Γ*. Let q_I be some fixed state in the subset I. Construct P′ from P by adding a new stack letter B ∉ Γ such that, upon reading any input letter with B on the top of the stack, every state q ∈ I pops B and stays in q, whereas every state q ∉ I pops B and moves to q_I. Notice that P′ is deterministic if P is. The constructed instance of Ada-Sync is (P′, Bγ).

It is clear that if (I, γ) ⟹_P s for some state s, then (Q, Bγ) ⟹_{P′} s. We now claim that the other direction is true as well. To see this, suppose there is a synchroniser in P′ (say T) between (Q, Bγ) and some state s. It is easy to see that, irrespective of the label of the outgoing edges from the root of T, every enabled transition pops B and pushes nothing, so the root has exactly one child, which is labelled by (I, γ). Now, since no transition pushes B, we get a synchroniser between (I, γ) and s in P. ◀

Lemma 4 allows us to introduce a series of problems which we can prove are poly-time equivalent to
Ada-Sync. The reason to consider these problems is that lower bounds for these are substantially easier to prove than for Ada-Sync. The three problems are as follows:
- Given-Sync: Given a PDA P, a subset I, a word γ and also a state s, check if (I, γ) ⟹ s.
- Super-Sync has the same input as Given-Sync, except we ask if (I, γ) ⟹^sup s.
- Special-Sync is the same as Super-Sync but restricted to inputs where γ is ⊥.

Lemma 5. Subset-Ada-Sync, Given-Sync, Super-Sync and Special-Sync are all polynomial-time equivalent. Further, the same applies for their corresponding deterministic versions.
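The construction underlying Lemma 4 can be sketched as follows; the tuple encoding, the fresh-symbol name and the helper are our assumptions for illustration:

```python
def subset_to_full(delta, states, sigma, I, q_I, B="#"):
    """Sketch of the reduction from Subset-Ada-Sync to Ada-Sync: add a
    fresh stack letter B on which every state in I pops B and stays put,
    while every state outside I pops B and moves to the fixed state q_I
    in I.  The instance (I, gamma) becomes (states, B + gamma).
    Transitions are encoded as tuples (p, a, A, q, pushed_word)."""
    new_delta = set(delta)
    for q in states:
        for a in sigma:
            target = q if q in I else q_I
            new_delta.add((q, a, B, target, ""))   # pop B, push nothing
    return new_delta
```

Since every added transition pops B and pushes the same (empty) word, they all fall into a single equivalence class of ∼, so whatever the observer's first input is, the unique successor of (Q, B·γ) is (I, γ).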
Because of this lemma, for the rest of this paper, we will only be concerned with the Special-Sync problem, where given a PDA P, a subset I and a state s, we have to decide if (I, ⊥) ⟹^sup s.

To prove the lower bounds, we introduce the notion of an alternating extended pushdown system (AEPS), which is an extension of pushdown systems with Boolean variables and alternation. An alternating extended pushdown system (AEPS) A is a tuple (Q, V, Γ, ∆, init, fin) where Q and V are finite sets of states and Boolean variables respectively, Γ is the stack alphabet, and init, fin ∈ Q are the initial and final states respectively. A has no input letters, but it has a stack to which it can pop and push letters from Γ. Each variable in V is of Boolean type, and a transition of A can apply simple tests on these variables and, depending on the outcome, can update their values. A configuration of A is a tuple (q, γ, F) where q ∈ Q, γ ∈ Γ* and F : V → {0, 1} is a function assigning a Boolean value to each variable.

Let test denote the set of tests given by {v ?= b : v ∈ V, b ∈ {0, 1}} and let cmd denote the set of commands given by {v ← b : v ∈ V, b ∈ {0, 1}}. A consistent command is a conjunction of elements from cmd such that for every v ∈ V, not both v ← 0 and v ← 1 appear in it. The transition relation ∆ consists of transitions of the form (q, A, G) → {(q₁, γ₁, C₁), ..., (q_k, γ_k, C_k)} where q, q₁, ..., q_k ∈ Q, A ∈ Γ, γ₁, ..., γ_k ∈ Γ*, G is a conjunction of elements from test and each C_i is a consistent command. Intuitively, at a configuration (q, Aγ, F) the machine non-deterministically selects a transition of the form (q, A, G) → {(q₁, γ₁, C₁), ..., (q_k, γ_k, C_k)} such that the assignment F satisfies the conjunction G, and then forks into k copies in the configurations (q₁, γ₁γ, F[C₁]), ..., (q_k, γ_kγ, F[C_k]), where F[C_i] is the function obtained by updating F according to the command C_i.
With this intuition in mind, we say that a transition (q, A, G) → {(q₁, γ₁, C₁), ..., (q_k, γ_k, C_k)} is enabled at a configuration (p, Bγ, F) iff p = q, B = A and F satisfies all the tests in G.

A run from a configuration (q, η, H) to a configuration (q′, η′, H′) is a tree satisfying the following properties: The root is labelled by (q, η, H). If some internal node n is labelled by (p, Aγ, F) then there exists a transition (p, A, G) → {(p₁, γ₁, C₁), (p₂, γ₂, C₂), ..., (p_k, γ_k, C_k)} which is enabled at (p, Aγ, F) such that the children of n are labelled by (p₁, γ₁γ, F[C₁]), ..., (p_k, γ_kγ, F[C_k]), where F[C_i](v) = b if C_i contains a command of the form v ← b and F[C_i](v) = F(v) otherwise. Finally, all the leaves are labelled by (q′, η′, H′). If a run exists between (q, η, H) and (q′, η′, H′) then we denote it by (q, η, H) →*_A (q′, η′, H′). An accepting run from a configuration (q, η, H) is a run from (q, η, H) to (fin, ⊥, 0̄) where 0̄ is the zero function. An accepting run of an AEPS is simply an accepting run from the initial configuration (init, ⊥, 0̄). The emptiness problem is then to decide whether a given AEPS has an accepting run.

By a simple adaptation of the EXPTIME-hardness proof for emptiness of alternating pushdown systems, which have no Boolean variables (Theorem 5.4 of [5], Prop. 31 of [22]), we prove that

Lemma 6. The emptiness problem for AEPS is 2-EXPTIME-hard.

An AEPS A is called a non-deterministic extended pushdown system (NEPS) if every transition of A is of the form (p, A, G) → {(q, γ, C)}. By Theorem 2 of [12] we have that

Lemma 7. The emptiness problem for NEPS is EXPTIME-hard.

Remark 8. The hardness result for AEPS could also be inferred from Theorem 10 of [12]. Because we use a different notation, for the sake of completeness, we provide the proofs of both of these lemmas in the appendix.
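The step semantics of an AEPS can be sketched as follows; the tuple encodings of transitions and configurations are our assumptions for illustration:

```python
def enabled(trans, config):
    """trans = (q, A, guard, branches): guard is a set of tests (v, b),
    branches a list of (q_i, gamma_i, cmd_i) where cmd_i is a dict giving
    a consistent command.  config = (state, stack_word, assignment)."""
    q, A, guard, _ = trans
    p, stack, F = config
    return p == q and stack[:1] == A and all(F[v] == b for (v, b) in guard)

def fire(trans, config):
    """Fork an AEPS configuration into the children of an enabled transition."""
    assert enabled(trans, config)
    _, _, _, branches = trans
    _, stack, F = config
    children = []
    for (qi, gamma, cmd) in branches:
        Fi = dict(F)
        Fi.update(cmd)                        # apply the command C_i
        children.append((qi, gamma + stack[1:], Fi))  # replace the top A
    return children
```

On the example of Figure 3, firing t at the configuration C yields exactly the two configurations C₁ and C₂.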
We now give a reduction from the emptiness problem for AEPS to Special-Sync. Let A = (Q, V, Γ, ∆, init, fin) be an AEPS. Without loss of generality, we can assume that if (q, A, G) → {(q₁, γ₁, C₁), ..., (q_k, γ_k, C_k)} ∈ ∆, then γ_i ≠ γ_j for i ≠ j. (This can be accomplished by prefixing new characters to each γ_i, moving to some intermediate states and then popping the new characters and moving to the respective q_i's.) Having made this assumption, the reduction is described below.

From the given AEPS A, we now construct a pushdown automaton P as follows. The stack alphabet of P will be Γ. For each transition t ∈ ∆, P will have an input letter in(t). P will also have another input letter end. The state space of P will be the set Q ∪ (V × {0, 1}) ∪ {q_acc, q_rej}, where q_acc and q_rej are two states which, on reading any input letter, leave the stack untouched and simply stay at q_acc and q_rej respectively.

We now give an intuition behind the transitions of P. Given an assignment F : V → {0, 1} of the Boolean variables V and a state q of A, we use the notation [q, F] to denote the subset {q} ∪ {(v, F(v)) : v ∈ V} of states of P. Intuitively, a configuration (q, γ, F) of A is simulated by its corresponding pseudo-configuration ([q, F], γ) in P. This intuition is captured by Figure 3, which gives an example of a step in P.

Figure 3
Let t be the transition (q₁, A, [v₁ ?= 0 ∧ v₂ ?= 1]) → {(q₂, AB, [v₁ ← 1 ∧ v₂ ← 0]), (q₃, ε, [v₂ ← 0])} in A. In A, using t, the configuration C := (q₁, ABA⊥, [v₁ = 0, v₂ = 1, v₃ = 1]) can fork into C₁ := (q₂, ABBA⊥, [v₁ = 1, v₂ = 0, v₃ = 1]) and C₂ := (q₃, BA⊥, [v₁ = 0, v₂ = 0, v₃ = 1]). The diagram illustrates the simulation of this forking on the corresponding pseudo-configurations of C, C₁, C₂ that the automaton P will achieve when reading the letter in(t).

Now we give a formal description of the transitions of P. Let t = (q, A, G) → {(q₁, γ₁, C₁), ..., (q_k, γ_k, C_k)} be a transition of A. Let p ∈ Q. Upon reading in(t), if p ≠ q then p immediately moves to the q_rej state. Further, even the state q moves to the q_rej state if the top of the stack is not A. However, if the top of the stack is A, then q pops A and non-deterministically pushes any one of γ₁, ..., γ_k onto the stack; if it pushed γ_i, then q moves to the state q_i.

Let (v, b) ∈ V × {0, 1}. Upon reading in(t), if the test v ?= (1 − b) appears in the guard G, then (v, b) immediately moves to the q_rej state. (Notice that this is a purely syntactical condition on A.) Further, if the top of the stack is not A, then once again (v, b) moves to q_rej. If these two cases do not hold, then (v, b) pops A and non-deterministically picks an i ∈ {1, ..., k} and pushes γ_i onto the stack. Having pushed γ_i, if C_i does not update the variable v, it stays in state (v, b); otherwise, if C_i has a command v ← b′, it moves to (v, b′).

Finally, upon reading end, the states in [fin, 0̄] move to the q_acc state and all the other states in Q ∪ (V × {0, 1}) move to the q_rej state.

We now claim that A has an accepting run iff there is a super-synchroniser in P between ([init, 0̄], ⊥) and q_acc.
Intuitively, any accepting run of A can be simulated by the corresponding pseudo-configurations in a manner similar to Figure 3, and once we arrive at the pseudo-configuration ([fin, 0̄], ⊥), we can input the letter end and synchronise to the state q_acc. For the reverse direction, we can show that any super-synchroniser between ([init, 0̄], ⊥) and q_acc must be a simulation of an accepting run in A.

Notice that P is deterministic if A is non-deterministic. Hence, by Lemmas 6 and 7,

Theorem 9. Special-Sync, Subset-Ada-Sync and Ada-Sync are all 2-EXPTIME-hard. Det-Special-Sync, Det-Subset-Ada-Sync and Det-Ada-Sync are all EXPTIME-hard.
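The way a single state of P reacts to the letter in(t) can be sketched as follows; the tuple encoding of transitions (with commands as dicts) is our assumption for illustration:

```python
def moves_on_in_t(t, p_state, q_rej="q_rej"):
    """How a state of P reacts to the letter in(t), following the
    construction in the text.  t = (q, A, guard, branches) with guard a
    set of tests (v, b) and branches a list of (q_i, gamma_i, cmd_i);
    p_state is either a control state of A or a variable state (v, b).
    Returns the possible (successor state, pushed word) pairs when the
    top of the stack is A; with any other top symbol the state would
    move to q_rej instead."""
    q, A, guard, branches = t
    if isinstance(p_state, tuple):               # a variable state (v, b)
        v, b = p_state
        if (v, 1 - b) in guard:                  # guard contradicts (v, b)
            return [(q_rej, A)]                  # stack left untouched
        # guess the branch; the pushed word gamma_i reveals the guess
        return [((v, cmd.get(v, b)), gamma) for (_, gamma, cmd) in branches]
    if p_state != q:                             # wrong control state of A
        return [(q_rej, A)]
    return [(qi, gamma) for (qi, gamma, _) in branches]
```

Because the γ_i are pairwise distinct, the observer can read off from the stack which branch of t was taken, which is exactly what makes the forking of Figure 3 visible in the pseudo-configurations.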
In this section, we will give algorithms that solve Special-Sync and Det-Special-Sync. We first give a reduction from Special-Sync to the problem of checking emptiness in an alternating pushdown system, which we define below. Then, we show that for Det-Special-Sync, the same reduction produces alternating pushdown systems with a "modular" structure, which we exploit to reduce the running time.

An alternating pushdown system (APS) is an alternating extended pushdown system which has no Boolean variables. Since there are no variables, we can suppress any notation corresponding to the variables, e.g., configurations can be denoted simply by (q, γ). It is known that the emptiness problem for APS is in EXPTIME (Theorem 4.1 of [3]). We now give an exponential-time reduction from Special-Sync to the emptiness problem for APS.

Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q, s ∈ Q. Construct the following APS A_P = (2^Q, Γ, ∆, I, {s}) where ∆ is defined as follows: Given S ⊆ Q, a ∈ Σ and A ∈ Γ, let E₁, ..., E_k be the equivalence classes of the relation ∼^a_{S,A} as defined in Subsection 2.2. Then, we have the following transition in A_P:

(S, A) → {(next(E₁), word(E₁)), (next(E₂), word(E₂)), ..., (next(E_k), word(E_k))}   (1)

The following fact is immediate from the definition of a super-synchroniser and from the construction of A_P.

Proposition 10.
Let S ⊆ Q, γ ∈ Γ*. Then a labelled tree T is a super-synchroniser between (S, γ) and s in P if and only if T is an accepting run from (S, γ) in A_P.

By Theorem 4.1 of [3], emptiness for APS can be solved in exponential time, and so

Theorem 11. Special-Sync is in 2-EXPTIME.
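The transitions of A_P from equation (1) can be sketched in Python; the transition encoding (p, a, A, q, w) and the restriction to non-empty subsets are our assumptions:

```python
from collections import defaultdict
from itertools import chain, combinations

def build_aps(delta, states, sigma, stack_letters):
    """Sketch of the APS A_P: for every subset S, input letter a and top
    symbol A, the equivalence classes of ~^a_{S,A} give one alternating
    transition (S, A) -> {(next(E_1), word(E_1)), ..., (next(E_k), word(E_k))}.
    Returns, for each (S, A), the set of alternatives (one per letter a)."""
    subsets = chain.from_iterable(combinations(sorted(states), r)
                                  for r in range(1, len(states) + 1))
    trans = defaultdict(set)
    for S in map(frozenset, subsets):
        for a in sigma:
            for A in stack_letters:
                classes = defaultdict(set)
                for (p, letter, top, q, w) in delta:
                    if p in S and letter == a and top == A:
                        classes[w].add(q)      # word(E) -> next(E)
                if classes:
                    trans[(S, A)].add(
                        frozenset((frozenset(qs), w)
                                  for w, qs in classes.items()))
    return trans
```

Note the exponential blow-up: the state space of A_P is the powerset 2^Q, which is where the 2-EXPTIME upper bound comes from; the deterministic case below avoids enumerating large subsets.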
Let P = (Q, Σ, Γ, δ) be a deterministic PDA with I ⊆ Q, s ∈ Q. We have the following proposition, whose proof follows from the fact that P is deterministic.

Proposition 12. Suppose S ⊆ Q, a ∈ Σ, A ∈ Γ and suppose E₁, ..., E_k are the equivalence classes of ∼^a_{S,A}. Then |S| ≥ Σ_{i=1}^{k} |next(E_i)|.

Now, given P, consider the APS A_P = (2^Q, Γ, ∆, I, {s}) that we have constructed in Subsection 5.1. By Proposition 12, we now have the following lemma.

Lemma 13. For any S ⊆ Q, γ ∈ Γ*, any accepting run of A_P from the configuration (S, γ) has at most |S| leaves.

The following corollary follows from the lemma above.

Corollary 14. Any accepting run of A_P has at most |I| leaves.
Example 15. Let P be the deterministic PDA from Figure 1. Figure 4 shows an example of an accepting run in the corresponding APS A_P from I := {1, 2, 3, 4}. Notice that there are |I| = 4 leaves in this run.

Corollary 14 motivates the study of the following problem, which we call the sparse emptiness problem for APSs (Sparse-Empty):

Given: An APS A and a number k in unary.
Decide: Whether there exists an accepting run for A with at most k leaves

We prove the following theorem about Sparse-Empty in the next section.

Theorem 16. Given A and k, the Sparse-Empty problem can be solved in time O(|A|^{ck}) for a fixed constant c.

Figure 4 An accepting run of A_P for the deterministic PDA P given in Figure 1.

Figure 5 A compressed accepting run of A_P for the deterministic PDA P given in Figure 1, obtained by compressing the run from Figure 4.

Now, because of Proposition 12 and because of the structure of the transitions of A_P (as given by equation (1)), it is sufficient to restrict the construction of A_P to only those states which have cardinality at most |I| and hence, it can be assumed that |A_P| ≤ |P|^{|I|}. This fact, along with Proposition 10, Corollary 14 and Theorem 16, implies the following theorem.

Theorem 17.
Given an instance (P, I, s) of Det-Special-Sync, we can check if (I, ⊥) ⟹^sup_P s in time O(n^{ck}), where n = |P|, k = |I| and c is some fixed constant.

Remark 18. Note that the algorithm to solve Det-Special-Sync on an instance (P, I, s), although in EXPTIME, is polynomial if |I| is fixed and quasi-polynomial if |I| = O(log |P|).

This subsection is dedicated to proving Theorem 16. We fix an alternating pushdown system A = (Q, Γ, ∆, init, fin) and a number k for the rest of this subsection. A k-accepting run of A is defined to be an accepting run of A with at most k leaves. We now split the desired algorithm for Sparse-Empty into three parts. Finally, we give its runtime analysis.
Compressing k-accepting runs of A: We define a non-deterministic pushdown system (NPS) to be a non-deterministic extended pushdown system which has no Boolean variables. From A, we can derive an NPS by deleting all transitions of the form (q, A) ↪ {(q_1, γ_1), ..., (q_k, γ_k)} with k > 1. We will denote this NPS by N. Emptiness of NPS is known to be solvable in polynomial time (Theorem 2.1 of [3]). To exploit this fact for our problem, we propose the following notion of a compressed accepting run of A. Intuitively, a compressed accepting run is obtained from an accepting run of A by "compressing" a series of transitions belonging to the non-deterministic part N into a single transition. The idea of a compressed accepting run is captured by Figure 5, which is obtained by compressing the run depicted in Figure 4.

Given a tree, we say that a vertex v in the tree is simple if it has exactly one child; otherwise we say that it is complex (note that all leaves are complex). A compressed accepting run of A from the configuration (p, η) is a labelled tree such that: The root is labelled by (p, η). If v is a simple vertex labelled by (q, γ) and u is its only child labelled by (q′, γ′), then u is a complex vertex and (q, γ) →*_N (q′, γ′). If v is a complex vertex labelled by (q, Aγ) and v_1, ..., v_k are its children with k > 1, then there is a transition (q, A) ↪ {(q_1, A_1), ..., (q_k, A_k)} in A such that the label of v_i is (q_i, A_i γ). Finally, all the leaves are labelled by (fin, ⊥). A compressed accepting run of A is a compressed accepting run from (init, ⊥), and a k-compressed accepting run is a compressed accepting run with at most k leaves. We now have the following lemma.

▶ Lemma 19.
There is a k-accepting run of A from a configuration (p, η) iff there is a k-compressed accepting run of A from (p, η).

Searching for k-compressed accepting runs: To fully use the result of Lemma 19, we need some results about non-deterministic pushdown systems, which we state here. Recall that N is an NPS over the states Q and stack alphabet Γ obtained from the APS A. We say that M = (Q_M, Γ, δ_M, F_M) is an N-automaton if M is a non-deterministic finite-state automaton over the alphabet Γ with accepting states F_M such that for each state q ∈ Q, there is a unique state q^M ∈ Q_M. The set of configurations of A that are stored by M (denoted by C(M)) is defined to be the set {(q, γ) : γ is accepted in M from the state q^M}. In the above definition, note that Q_M can potentially have more states than the set {q^M : q ∈ Q}.

▶ Theorem 20. (Section 2.3 and Theorem 2.1 of [3]) Given an N-automaton M, in time polynomial in the sizes of N and M, we can construct an N-automaton M′ which has the same states as M such that M′ stores the set of predecessors of M, i.e., C(M′) = {(q′, γ′) : ∃(q, γ) ∈ C(M) such that (q′, γ′) →*_N (q, γ)}.

We say that an unlabelled tree is structured if the child of every simple vertex is a complex vertex. An ℓ-structured tree is simply a structured tree which has at most ℓ leaves. Notice that the height of an ℓ-structured tree is O(ℓ) and, since it has at most ℓ leaves, it follows that an ℓ-structured tree can be described using a number of bits polynomial in ℓ. Hence, the number of ℓ-structured trees is O(2^{ℓ^c}) for some fixed c.

Now let us come back to the problem of searching for k-accepting runs of A. By Lemma 19 it suffices to search for a k-compressed accepting run of A. Notice that if we take a k-compressed accepting run and remove its labels, we get a k-structured tree.
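Theorem 20 is the classical saturation-based pre* construction for pushdown systems of Bouajjani, Esparza and Maler [3]. The following is a minimal Python sketch of that construction under simplifying assumptions: rules push words of length at most 2, and each pushdown state doubles as its own automaton state q^M. All names are illustrative, not from the paper or from [3].

```python
# A pushdown rule ((p, A), (q, w)) says: in state p with A on top of the
# stack, move to state q and replace A by the word w (a tuple, |w| <= 2).
# An N-automaton is a finite automaton over the stack alphabet; it stores
# C(M) = {(p, stack) : stack is accepted from the state p}.

def pre_star(rules, trans, final):
    """Saturate `trans` so that the automaton accepts pre*(C(M)).

    rules: set of ((p, A), (q, w)) pushdown rules.
    trans: set of (state, symbol, state) automaton transitions.
    final: set of accepting automaton states (left unchanged).
    """
    trans = set(trans)
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for (p, A), (q, w) in rules:
            # Find every state s reachable from q by reading w ...
            targets = {q}
            for sym in w:
                targets = {t for (u, c, t) in trans
                           if u in targets and c == sym}
            # ... and add the shortcut transition p --A--> s.
            for s in targets:
                if (p, A, s) not in trans:
                    trans.add((p, A, s))
                    changed = True
    return trans, final

def accepts(trans, final, start, word):
    """Membership test in the (non-deterministic) N-automaton."""
    states = {start}
    for sym in word:
        states = {t for (u, c, t) in trans if u in states and c == sym}
    return bool(states & final)
```

For example, with the pop rule ((p, A), (q, ())) and an automaton storing only (q, B), saturation adds the transition p --A--> q, so (p, AB) is recognised as a predecessor.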
Now, suppose we have an algorithm Check that takes a k-structured tree T and checks if T can be labelled to make it a k-compressed accepting run of A. Then, by calling Check on every k-structured tree, we have an algorithm to check for the existence of a k-compressed accepting run of A. Hence, it suffices to describe this procedure Check, which is what we will do now.
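The family of trees that this enumeration ranges over can be made concrete with a small Python sketch (ours, purely illustrative): trees are nested tuples, () is a leaf, ('s', t) places a simple vertex above a complex tree t, and ('c', t1, ..., tm) with m ≥ 2 is a complex vertex. Since the child of a simple vertex must be complex, simple vertices never stack on top of each other.

```python
from itertools import product

def compositions(n, min_parts=2):
    """All ordered ways of writing n as a sum of >= min_parts positive ints."""
    def go(n, parts):
        if parts == 1:
            yield (n,)
            return
        for first in range(1, n - parts + 2):
            for rest in go(n - first, parts - 1):
                yield (first,) + rest
    for parts in range(min_parts, n + 1):
        yield from go(n, parts)

def structured_trees(leaves):
    """All structured trees with exactly `leaves` leaves."""
    if leaves == 1:
        # a lone leaf, or one simple vertex above it
        return [(), ('s', ())]
    out = []
    # complex root: split the leaves among m >= 2 children
    for split in compositions(leaves):
        for kids in product(*(structured_trees(m) for m in split)):
            complex_tree = ('c',) + kids
            out.append(complex_tree)
            out.append(('s', complex_tree))  # optional simple vertex on top
    return out
```

Enumerating all k-structured trees then amounts to taking the union of `structured_trees(m)` for 1 ≤ m ≤ k; the counts grow exponentially in k, matching the bound in the text.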
The algorithm Check: Let T be a k-structured tree. For each vertex v in the tree T, Check will assign an N-automaton M_v such that M_v will have the following property:

Invariant (*): A configuration (q, γ) ∈ C(M_v) iff all the vertices of the subtree rooted at v can be labelled such that the resulting labelled subtree is a compressed accepting run of A from (q, γ).

The construction of each M_v is as follows. Let Q be the states and ∆ be the transitions of the alternating pushdown system A.

Suppose vertex v is a leaf. We let M_v be an automaton such that C(M_v) = {(fin, ⊥)}. Notice that such an M_v can be easily constructed in polynomial time.

Suppose vertex v is simple and u is its child. We take M_u and use Theorem 20 to construct the N-automaton M_v. Note that M_v has the same set of states as M_u.

Suppose v is complex and suppose v_1, ..., v_ℓ are its children. For each 1 ≤ i ≤ ℓ and for every configuration (q, γ) of A, let δ_i(q^{M_{v_i}}, γ) denote the set of states that the automaton M_{v_i} can be in after reading γ from the state q^{M_{v_i}}. To construct M_v, first do a product construction, which we denote by M_{v_1} × M_{v_2} × ··· × M_{v_ℓ}. Then, for each q ∈ Q, add a state q^{M_v}. Then, for each transition (p, A) ↪ {(p_1, γ_1), ..., (p_ℓ, γ_ℓ)} in ∆, add a transition in M_v which, upon reading A, takes p^{M_v} to any of the states in δ_1(p_1^{M_{v_1}}, γ_1) × δ_2(p_2^{M_{v_2}}, γ_2) × ··· × δ_ℓ(p_ℓ^{M_{v_ℓ}}, γ_ℓ). Intuitively, we accept a word Aγ from the state p^{M_v} iff, for each i, the word γ_i γ can be accepted from the state p_i^{M_{v_i}}.

▶ Proposition 21.
For each vertex v of the tree T, M_v satisfies invariant (*).

Finally, we accept iff (init, ⊥) ∈ C(M_r), where r is the root of the tree. The correctness of Check follows from the proposition above.
Running time analysis: Let us analyse the running time of Check. Let T be a k-structured tree; therefore T has O(k) vertices. Check assigns to each vertex v of T an automaton M_v. We claim that the running time of Check is O(k · |A|^{ck}) (for some constant c) because of the following:

1) By induction on the structure of the tree T, it can be proved that there exists a constant d such that, if h_v is the height of a vertex v and l_v is the number of leaves in the subtree of v, then the number of states of M_v is O(|A|^{d(h_v + l_v)}) (recall that h_v + l_v is at most O(k)).
2) If an N-automaton has n states, then the number of transitions it can have is O(|A| · n^2).
3) For a vertex v with children v_1, ..., v_ℓ, M_v can be constructed in time polynomial in |M_{v_1}| · |M_{v_2}| ··· |M_{v_ℓ}| and |A|.

Now the final algorithm for Sparse-Empty simply iterates over all k-structured trees and calls Check on all of them. Since the number of k-structured trees is at most f(k), where f is an exponential function, it follows that the total running time is O(f(k) · k · |A|^{ck}) = O(|A|^{ek}) for some constant e.

Conclusion

Our results can be considered as a step in the research direction recently proposed by Fernau, Wolf and Yamakami in [11], in which the authors prove that the synchronisation problem for PDAs is undecidable when the stack is not visible. They also suggest looking into different variants of synchronisation for PDAs with a view towards the decidability and complexity frontier. Within this context, we believe we have proposed a natural variant of synchronisation in which the observer can see the stack, and we have given decidability results and complexity-theoretically optimal bounds for both the non-deterministic and the deterministic cases.

One can ask similar questions about almost any automaton model which has a "data" part that can be observed, for example, timed automata.
Another natural question is to consider the same problems for one-counter automata, i.e., pushdown automata with a single stack symbol. Though our results imply decidability for this case, further investigation into its complexity is needed. Finally, one can consider restrictions on pushdown automata that are weaker than determinism but still rule out full non-determinism, such as unambiguity or being good-for-games.
References

[1] Marie-Pierre Béal, Eugen Czeizler, Jarkko Kari, and Dominique Perrin. Unambiguous automata. Math. Comput. Sci., 1(4):625–638, 2008. doi:10.1007/s11786-007-0027-1.
[2] Yaakov Benenson, Rivka Adar, Tamar Paz-Elizur, Zvi Livneh, and Ehud Shapiro. DNA molecule provides a computing machine with both data and fuel. Proceedings of the National Academy of Sciences, 100(5):2191–2196, 2003.
[3] Ahmed Bouajjani, Javier Esparza, and Oded Maler. Reachability analysis of pushdown automata: Application to model-checking. In CONCUR '97: Concurrency Theory, volume 1243 of Lecture Notes in Computer Science, pages 135–150. Springer, 1997. doi:10.1007/3-540-63141-0_10.
[4] Manfred Broy, Bengt Jonsson, Joost-Pieter Katoen, Martin Leucker, and Alexander Pretschner, editors. Model-Based Testing of Reactive Systems, Advanced Lectures, volume 3472 of Lecture Notes in Computer Science. Springer, 2005.
[5] Ashok K. Chandra, Dexter Kozen, and Larry J. Stockmeyer. Alternation. J. ACM, 28(1):114–133, 1981. doi:10.1145/322234.322243.
[6] Dmitry Chistikov, Pavel Martyugin, and Mahsa Shirmohammadi. Synchronizing automata over nested words. J. Autom. Lang. Comb., 24(2-4):219–251, 2019. doi:10.25596/jalc-2019-219.
[7] Laurent Doyen, Line Juhl, Kim Guldstrand Larsen, Nicolas Markey, and Mahsa Shirmohammadi. Synchronizing words for weighted and timed automata. In FSTTCS 2014, volume 29 of LIPIcs, pages 121–132. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2014. doi:10.4230/LIPIcs.FSTTCS.2014.121.
[8] Laurent Doyen, Thierry Massart, and Mahsa Shirmohammadi. The complexity of synchronizing Markov decision processes. J. Comput. Syst. Sci., 100:96–129, 2019. doi:10.1016/j.jcss.2018.09.004.
[9] David Eppstein. Reset sequences for monotonic automata. SIAM J. Comput., 19(3):500–510, 1990.
[10] Henning Fernau and Petra Wolf. Synchronization of deterministic visibly push-down automata. In FSTTCS 2020, volume 182 of LIPIcs, pages 45:1–45:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.FSTTCS.2020.45.
[11] Henning Fernau, Petra Wolf, and Tomoyuki Yamakami. Synchronizing deterministic push-down automata can be really hard. In MFCS 2020, volume 170 of LIPIcs, pages 33:1–33:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.MFCS.2020.33.
[12] Patrice Godefroid and Mihalis Yannakakis. Analysis of Boolean programs. In TACAS 2013, volume 7795 of Lecture Notes in Computer Science, pages 214–229. Springer, 2013. doi:10.1007/978-3-642-36742-7_16.
[13] F. C. Hennie. Fault detecting experiments for sequential circuits. In Proceedings of the Fifth Annual Symposium on Switching Circuit Theory and Logical Design, pages 95–110, 1964.
[14] Chris Keeler and Kai Salomaa. Alternating finite automata with limited universal branching. In LATA 2020, volume 12038 of Lecture Notes in Computer Science, pages 196–207. Springer, 2020. doi:10.1007/978-3-030-40608-0_13.
[15] Arun Lakhotia, Eric Uday Kumar, and Michael Venable. A method for detecting obfuscated calls in malicious binaries. IEEE Transactions on Software Engineering, 31(11):955–968, 2005.
[16] Kim Guldstrand Larsen, Simon Laursen, and Jirí Srba. Synchronizing strategies under partial observability. In CONCUR 2014, volume 8704 of Lecture Notes in Computer Science, pages 188–202. Springer, 2014. doi:10.1007/978-3-662-44584-6_14.
[17] D. Lee and M. Yannakakis. Principles and methods of testing finite state machines - a survey. Proceedings of the IEEE, 84(8):1090–1123, 1996.
[18] Balas K. Natarajan. An algorithmic approach to the automated design of parts orienters. In 27th Annual Symposium on Foundations of Computer Science (FOCS 1986), pages 132–142. IEEE, 1986.
[19] Karin Quaas and Mahsa Shirmohammadi. Synchronizing data words for register automata. ACM Trans. Comput. Log., 20(2):11:1–11:27, 2019. doi:10.1145/3309760.
[20] Fu Song and Tayssir Touili. Pushdown model checking for malware detection. Int. J. Softw. Tools Technol. Transf., 16(2):147–173, 2014. doi:10.1007/s10009-013-0290-1.
[21] Mikhail V. Volkov. Synchronizing automata and the Černý conjecture. In LATA 2008, volume 5196 of Lecture Notes in Computer Science, pages 11–27. Springer, 2008. doi:10.1007/978-3-540-88282-4_4.
[22] Igor Walukiewicz. Pushdown processes: Games and model-checking. Inf. Comput., 164(2):234–263, 2001. doi:10.1006/inco.2000.2894.
[23] Ján Černý. Poznámka k homogénnym experimentom s konečnými automatmi. Matematicko-fyzikálny časopis, 14(3):208–216, 1964.
A Proofs for Section 3

▶ Remark 22. For the sake of brevity, for some tuples (q, a, A) ∈ Q × Σ × Γ, we will sometimes not specify δ(q, a, A). In such cases, it is to be assumed that δ(q, a, A) = (q, A). Also, occasionally we will describe transitions by saying "Upon reading a, the state p moves to q", without describing the changes to the stack. In these cases, it is to be assumed that, irrespective of the element at the top of the stack, the state p moves to q and does not change the stack.

▶ Lemma 5.
Subset-Ada-Sync, Given-Sync, Super-Sync and Special-Sync are all polynomial-time equivalent. Further, the same applies for their corresponding deterministic versions.
We break the proof of this lemma into various propositions, each one showing equivalence between a pair of problems.

▶ Proposition 23. Subset-Ada-Sync and Given-Sync are polynomial-time equivalent. Further, the same applies for their corresponding deterministic versions.
Proof.
Subset-Ada-Sync is reducible to Given-Sync
Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q and γ ∈ Γ*. The central idea is that the new PDA P′ that we construct will force the observer to initially decide on which state of P she wants to synchronise in, by inputting a special letter specific to each state of P. Once she has done that, this choice will be remembered in the states of P′. Finally, only when she believes she has synchronised in the state that she had chosen initially can she input a special letter which will take her to a special state q_acc.

This can be concretely implemented as follows: The states of P′ will be (Q × Q) ∪ (Q × {⋆}) ∪ {q_acc, q_rej}. The input alphabet of P′ will be Σ ∪ {decide_q : q ∈ Q} ∪ {done_q : q ∈ Q}. The transition relation δ′ of P′ is as follows: Upon reading decide_q, the state (p, ⋆) moves to the state (p, q). Upon reading any letter from Σ, the transitions on states (p, q) ∈ Q × Q just mimic the transitions of δ on the first co-ordinate and leave the second co-ordinate unchanged. Upon reading done_q, the states (q, q) and q_acc move to q_acc, while all the other states move to q_rej. Notice that P′ is deterministic if P is. We now claim that ∃s ∈ Q with (I, γ) ⇒_P s if and only if (I × {⋆}, γ) ⇒_{P′} q_acc.

(⇒) Suppose there exists s ∈ Q such that there is a synchroniser between (I, γ) and s in P (say T). Let (S_v, η_v) be the label of each vertex v in T. By converting the label (S_v, η_v) to (S_v × {s}, η_v) for each vertex v, we get a synchroniser between (I × {s}, γ) and (s, s) in P′. Now to the root of this synchroniser add a parent labelled by (I × {⋆}, γ) with its outgoing edge labelled by decide_s. Similarly, to each leaf v, add an outgoing edge labelled by done_s and label the child of this edge by (q_acc, η_v).
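Before continuing with the correctness argument, the transition function of P′ just described can be sketched in plain Python (the encoding and all names are ours, not the paper's; letters are tagged tuples so that decide_q and done_q stay disjoint from Σ):

```python
def make_given_sync_pda(step):
    """Given step(p, a, top) -> (new_state, pushed_word) for a PDA P,
    return the transition function of the PDA P' sketched in the text.
    States of P' are pairs (p, '*') / (p, q), plus 'q_acc' and 'q_rej'.
    Letters are ('decide', q), ('done', q) or ('sigma', a) for a in the
    alphabet of P.  Following the convention of Remark 22, unspecified
    transitions leave the state and the stack unchanged."""
    def step2(state, letter, top):
        kind, arg = letter
        if kind == 'decide' and isinstance(state, tuple) and state[1] == '*':
            return ((state[0], arg), (top,))       # commit to target arg
        if kind == 'sigma' and isinstance(state, tuple) and state[1] != '*':
            p, target = state
            new_p, pushed = step(p, arg, top)      # mimic P on 1st coordinate
            return ((new_p, target), pushed)
        if kind == 'done':
            ok = state == 'q_acc' or state == (arg, arg)
            return ('q_acc' if ok else 'q_rej', (top,))
        return (state, (top,))                     # Remark 22 convention
    return step2
```

As a toy sanity check: if every state of P moves to state 2 on letter 'a', then from {(1, '*'), (2, '*')} the input sequence decide_2, a, done_2 collapses the state set to {q_acc}.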
By inspection, it can be easily verified that this new tree is a synchroniser between (I × {⋆}, γ) and q_acc in P′.

(⇐) Suppose there exists a synchroniser between (I × {⋆}, γ) and q_acc in P′ (say T). Assume that T is a minimal such synchroniser. We recall our convention stated in Remark 22 that if we did not state the image of δ′(q, a, A) for some q, a, A, then δ′(q, a, A) = (q, A). With this convention in mind, by using the fact that T is minimal, we can easily conclude that the root has exactly one outgoing edge labelled by decide_s for some s ∈ Q and that the child of this edge is labelled by (I × {s}, γ). Hence, if we remove the root of T, we get a synchroniser between (I × {s}, γ) and q_acc, which we will denote by T′. By our assumption on T, it follows that T′ is a minimal such synchroniser between (I × {s}, γ) and q_acc in P′. Let (S_v, η_v) be the label of the vertex v in the tree T′. We now make a series of observations.

No state in Q × {⋆} has an incoming transition. Hence,

Fact A: For every vertex v, S_v ∩ (Q × {⋆}) = ∅.

By our convention stated in Remark 22, it follows that δ′(p, decide_q, A) = (p, A) for any p ∉ Q × {⋆}, A ∈ Γ and q ∈ Q. By Fact A and by the minimality of T′ we get,

Fact B: No edge of T′ is labelled by decide_q for any q ∈ Q.

Notice that q_rej has no outgoing transitions. Since S_u = {q_acc} for any leaf u, it follows that

Fact C: For every vertex v, q_rej ∉ S_v.

By induction on the structure of the tree T′, we now prove that

Fact D: For all non-leaves v, S_v ⊆ Q × {s}. Further, if the outgoing edge from some vertex v is labelled by done_q for some q ∈ Q, then q = s and v is the parent of a leaf, with S_v = {(s, s)}.

Clearly S_v ⊆ Q × {s} when v is the root vertex. Suppose for some non-leaf v, we have S_v ⊆ Q × {s}. If the outgoing edges from v are labelled by some letter from Σ, then it is clear that for all children v′ of v we have S_{v′} ⊆ Q × {s}.
By Fact B, no edge can be labelled by decide_q for any q ∈ Q. Hence, the only remaining case is when the outgoing edges of v are labelled by done_q for some q ∈ Q. In this case, there is only one child of v (say v′). Since S_v ⊆ Q × {s}, if done_q ≠ done_s or if S_v ≠ {(s, s)}, then q_rej ∈ S_{v′}, which contradicts Fact C. Hence, S_v = {(s, s)}, done_q = done_s and so S_{v′} = {q_acc}. By minimality of T′, v′ must be a leaf.

By combining all the facts, it follows that if we remove the leaves of T′, we get a synchroniser between (I × {s}, γ) and (s, s) in P′ such that all the edges are labelled by letters from Σ alone. Hence, if we remove the second co-ordinate s from each state in the label of each vertex of T′, we will get a synchroniser between (I, γ) and s in P.

Therefore, we have shown that ∃s ∈ Q with (I, γ) ⇒_P s if and only if (I × {⋆}, γ) ⇒_{P′} q_acc. Hence, this gives the desired reduction from Subset-Ada-Sync to Given-Sync.

Given-Sync is reducible to Subset-Ada-Sync
Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q, s ∈ Q and γ ∈ Γ*. The central idea is to take two disjoint copies of P and then add a new state q_acc such that the only state reachable from both these copies of P is the state q_acc. Further, this state q_acc can be reached only from the copies of the state s. Then, if at all synchronisation is possible from both the copies of I in P′, it has to happen at q_acc and so must go through the corresponding copies of s. Hence the projection of this synchronisation on any of the copies will lead to a synchronisation from I to s in P.

This idea can be concretely implemented as follows: The states of P′ will be Q′ = (Q × {0, 1}) ∪ {q_acc, (q_rej, 0), (q_rej, 1)}. The input alphabet of P′ will be Σ ∪ {end}. Upon reading any letter from Σ, the transitions on a state (q, b) ∈ Q × {0, 1} just mimic the transitions of δ on the first co-ordinate and leave the second one unchanged. Upon reading end, the states (s, 0), (s, 1) and q_acc move to q_acc and all the other states move to their corresponding copy of q_rej. Notice that P′ is deterministic if P is. We now claim that (I, γ) ⇒_P s iff there exists q ∈ Q′ such that (I × {0, 1}, γ) ⇒_{P′} q.

(⇒) Suppose there is a synchroniser between (I, γ) and s in P (say T). Let (S_v, η_v) be the label of every vertex v in T. Modify T as follows: Replace (S_v, η_v) with (S_v × {0, 1}, η_v) for every vertex v. Then, to each leaf v, add an outgoing edge labelled by end and label the child of this edge by (q_acc, η_v). By inspecting the transition relation, it can be easily verified that this modified tree is a synchroniser between (I × {0, 1}, γ) and q_acc in P′.

(⇐) Suppose there exists q ∈ Q′ such that (I × {0, 1}, γ) ⇒_{P′} q. Since q_acc is the only state reachable from both I × {0} and I × {1}, it follows that q = q_acc. Hence, we have a synchroniser between (I × {0, 1}, γ) and q_acc in P′ (say T). We can assume that T is a minimal such synchroniser. Let (S_v, η_v) be the label of each vertex v in T. We now make a series of observations.

Notice that (q_rej, 0) and (q_rej, 1) both have no outgoing transitions. Since S_u = {q_acc} for any leaf u, it follows that

Fact A: For every vertex v, (q_rej, 0) ∉ S_v and (q_rej, 1) ∉ S_v.

By induction on the structure of the tree T, we now prove that

Fact B: For all non-leaf vertices v, there exists Q_v ⊆ Q such that S_v = Q_v × {0, 1}. Further, if the outgoing edge from some vertex v is labelled by end, then v is the parent of a leaf, with S_v = {s} × {0, 1}.

Clearly S_v = I × {0, 1} when v is the root. Suppose for some non-leaf vertex v, there exists Q_v such that S_v = Q_v × {0, 1}. If the outgoing edges from v are labelled by some letter from Σ, then it is clear that for all children v′ of v there exists Q_{v′} with S_{v′} = Q_{v′} × {0, 1}. Hence, the only remaining case is when the outgoing edges of v are labelled by end. In this case, there is only one child of v (say v′). If Q_v ≠ {s}, then (q_rej, 0), (q_rej, 1) ∈ S_{v′}, which contradicts Fact A. Hence, Q_v = {s} and so S_v = {s} × {0, 1}. Therefore, it follows that S_{v′} = {q_acc}. By minimality of T, v′ must be a leaf.

It then follows that if we remove the leaves of T, we get a synchroniser between (I × {0, 1}, γ) and {s} × {0, 1} such that all the edges are labelled by letters from Σ alone. Hence, if we project the labels of each vertex on the first copy, we get a synchroniser between (I, γ) and s in P.

Therefore, we have shown that (I, γ) ⇒_P s iff there exists q ∈ Q′ such that (I × {0, 1}, γ) ⇒_{P′} q. This gives the desired reduction from Given-Sync to Subset-Ada-Sync. ◀

▶ Proposition 24.
Given-Sync is polynomial-time equivalent to Super-Sync. Further, the same is true for the corresponding deterministic versions.
Proof.
Given-Sync is reducible to Super-Sync
Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q, s ∈ Q and γ ∈ Γ*. Construct P′ from P by adding two new states q_acc and q_rej and two new input letters end and pop. Upon inputting end, the states s and q_acc move to q_acc, whereas all the other states move to q_rej. Upon inputting pop, all the states except q_acc move to q_rej, whereas q_acc remains at q_acc and keeps on popping the stack. Notice that P′ is deterministic if P is. We now claim that

(I, γ) ⇒_P s if and only if (I, γ) ⇒^sup_{P′} q_acc.

(⇒) Let T be a synchroniser between (I, γ) and s in P and let (S_v, η_v) be the label of each vertex v. From each leaf v, add an outgoing edge labelled by end and label the child of this edge by (q_acc, η_v). Let η_v = w_1 w_2 ... w_k ⊥. Now, add a chain of k + 1 vertices from this child with each edge labelled by pop, such that the i-th vertex in the chain is labelled by (q_acc, w_i w_{i+1} ... w_k ⊥). It is clear that the new tree is a super-synchroniser between (I, γ) and q_acc in P′.

(⇐) Let T be a super-synchroniser between (I, γ) and q_acc in P′. We can assume that T is a minimal such super-synchroniser. Let (S_v, η_v) be the label of each vertex v in T. We now make a series of observations.

Because there are no outgoing transitions from q_rej and since S_u = {q_acc} for every leaf u, we have,

Fact A: For every vertex v, q_rej ∉ S_v.

We now claim that,

Fact B: Along every branch of T, there is a vertex v with only one child v′ such that S_v = {s}, the outgoing edge from v is labelled by end and S_{v′} = {q_acc}. Further, for every vertex v′′ before v in this branch, we have S_{v′′} ⊆ Q, and no edge before the edge (v, v′) along this branch is labelled by end.

Let us consider a branch of the tree T. We will say that v ≤ v′ for two vertices along this branch if v appears before v′ along this branch. Now, the root of the branch is labelled by (I, γ), whereas the leaf is labelled by (q_acc, ⊥). Hence, there must be a vertex v and its child v′ along this branch such that q_acc ∈ S_{v′} but q_acc ∉ S_{v′′} for every vertex v′′ ≤ v. By Fact A, q_rej ∉ S_{v′′} for every vertex v′′ ≤ v, and so S_{v′′} ⊆ Q for every v′′ ≤ v. Further, if the outgoing edge from some vertex v′′ < v is labelled by end, then the child of v′′ along this branch will contain either q_acc or q_rej, which will lead to a contradiction. Now, the only way to move from some state in Q to q_acc is by the letter end. Hence the edge between v and v′ must be labelled by end. Now, if S_v ≠ {s}, then q_rej ∈ S_{v′}, which contradicts Fact A. Hence S_v = {s} and S_{v′} = {q_acc}.

Hence, using Fact B, we proceed to cut the tree T as follows: Along every branch, find the vertex v as guaranteed by Fact B and then remove all the vertices after v along this branch. It follows that this reduced tree will be a synchroniser between (I, γ) and s in P.

Therefore, we have shown that (I, γ) ⇒_P s if and only if (I, γ) ⇒^sup_{P′} q_acc. Hence we get the desired reduction from Given-Sync to Super-Sync.

Super-Sync is reducible to Given-Sync
Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q, s ∈ Q and γ ∈ Γ*. Construct P′ from P by adding two new states q_acc and q_rej and one new input letter end. Upon inputting end, the state q_acc remains at q_acc, the state s moves to q_acc if the stack is empty and, in all the other cases, P′ moves to q_rej. Notice that P′ is deterministic if P is. We now claim that

(I, γ) ⇒^sup_P s if and only if (I, γ) ⇒_{P′} q_acc.

(⇒) Suppose T is a super-synchroniser between (I, γ) and s in P. Let (S_v, η_v) be the label of each vertex v. To every leaf of T, add an outgoing edge labelled by end, and label the child of this edge by (q_acc, ⊥). It follows that this new tree is a synchroniser between (I, γ) and q_acc in P′.

(⇐) Suppose T is a synchroniser between (I, γ) and q_acc in P′. We can assume T is a minimal such synchroniser. Let (S_v, η_v) be the label of each vertex v. We now make a series of observations.

Since there are no outgoing transitions from q_rej, and since S_u = {q_acc} for every leaf u, it follows that

Fact A: For every vertex v, q_rej ∉ S_v.

By induction on the structure of T, we claim that,

Fact B: If v is a non-leaf, then S_v ⊆ Q. Further, if the outgoing edge from some vertex v is labelled by end, then v is the parent of a leaf, with (S_v, η_v) = ({s}, ⊥).

Clearly S_v ⊆ Q when v is the root. Suppose for some non-leaf v, S_v ⊆ Q. If the outgoing edges from v are labelled by some letter from Σ, then it is clear that for all children v′ of v, S_{v′} ⊆ Q. Hence, the only remaining case is when the outgoing edges of v are labelled by end. In this case, there is only one child of v (say v′). If S_v ≠ {s} or if η_v ≠ ⊥, then q_rej ∈ S_{v′}, which contradicts Fact A. Hence, S_v = {s}, η_v = ⊥ and so S_{v′} = {q_acc}. By minimality of T, v′ must be a leaf.

Hence, if we remove all the leaves of T, we get a super-synchroniser between (I, γ) and s in P. Therefore, we have shown that (I, γ) ⇒^sup_P s iff (I, γ) ⇒_{P′} q_acc. This gives the desired reduction from Super-Sync to Given-Sync. ◀

▶ Proposition 25.
Super-Sync and Special-Sync are polynomial-time equivalent. Further, the same is true for the corresponding deterministic versions.
Proof.
It suffices to show that Super-Sync is reducible to Special-Sync, as the latter is a special case of the former. Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q, s ∈ Q and γ ∈ Γ*. Construct P′ from P by adding a new set of states I′ = {q′ : q ∈ I} such that, upon inputting any letter, the state q′ ∈ I′ pushes γ onto the stack and moves to q. It is obvious that (I, γ) ⇒^sup_P s iff (I′, ⊥) ⇒^sup_{P′} s. ◀

B Proofs of Section 4

▶ Lemma 6.
The emptiness problem for AEPS is 2-EXPTIME-hard.
Proof.
We show that the acceptance problem for alternating Turing machines with exponential space can be reduced to the emptiness problem for AEPS. Since alternating exponential-space machines correspond to deterministic doubly-exponential-time machines, it would then follow that the emptiness problem is 2-EXPTIME-hard.

More specifically, we are given a one-tape alternating Turing machine M, a word w and a number B encoded in binary, and the problem is to decide if M accepts w whilst using at most B tape cells. The reduction that we present here is similar to the reductions given in Theorem 5.4 of [5] (and also Prop. 31 of [22]) to prove that the emptiness problem for AEPS without any Boolean variables is EXPTIME-hard. The only additional insight that we have here is that, by using the Boolean variables in an AEPS, one can push exponentially many symbols onto the stack in a single path before cycling back to some state. This is because, using the tests and commands of an AEPS, one can implement a "counter" which can count up to some exponential value. If V denotes the set of variables in an AEPS, then we can store a number between 0 and 2^{|V|} − 1 as follows: if x_1, ..., x_ℓ are the values of the variables v_1, ..., v_ℓ at some point, then, at that point, the values of the variables in V denote the number whose binary representation is x := x_1 x_2 ··· x_ℓ, where x_1 is the most significant bit and x_ℓ the least significant bit. Using some tests and commands, it is easy to see that one can implement operations which effectively perform addition by 1, or check for equality with a specific value, say 2^{|V|} − 1.

Let Q be the states of M and Σ be the tape alphabet. A configuration of M will be denoted by the string w q w′, where w, w′ ∈ Σ*, q ∈ Q and where the head of the machine is always to the right of the control state. Let δ be the transition relation of M, where transitions are of the form (q, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)}, where q, q_1, ..., q_k ∈ Q, a, a_1, ..., a_k ∈ Σ and d_1, ..., d_k ∈ {left, right}.
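The bounded counter over Boolean variables can be made concrete with a small sketch (ours, purely illustrative): incrementing flips the trailing 1s to 0 and the first 0 from the right to 1, and equality with a fixed value is a sequence of fixed bit comparisons, exactly the kind of test-and-command sequences an AEPS can perform on its variables.

```python
def increment(bits):
    """Add 1 (mod 2**len(bits)) to the number encoded by bits, where
    bits[0] is the most significant bit: flip the trailing 1s to 0 and
    the first 0 from the right to 1."""
    bits = list(bits)
    for i in range(len(bits) - 1, -1, -1):
        if bits[i] == 0:
            bits[i] = 1
            return bits
        bits[i] = 0
    return bits  # overflow: wrapped around to 0

def equals(bits, value):
    """Test whether the encoded number equals `value`, bit by bit."""
    return all(b == (value >> (len(bits) - 1 - i)) & 1
               for i, b in enumerate(bits))
```

With ℓ variables this counts from 0 up to 2**ℓ - 1, which is how the reduction counts up to B + 1 using only ⌈log(B + 2)⌉ Boolean variables.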
We note that the existential branching of the alternation is captured by non-deterministically choosing a transition applicable at each configuration, and the universal branching is captured by forking into many copies as specified by the chosen transition.

We now construct an AEPS A which will guess and verify an accepting run of the machine M on the input w. A will operate in two stages. In the first stage, it guesses an accepting run of M. In the second stage, it verifies that this guess is indeed a valid run of the machine M. The machine A will have ℓ = ⌈log(B + 2)⌉ many Boolean variables. As mentioned before, using these Boolean variables we can have a bounded counter which will enable us to count up to B + 1 and also allow us to check if the counter value at any point is equal to some specific value (say something like B + 1 or B + 2). We will, in the description of the machine, use phrases like "checks if a value is x" or "pushes y onto the stack x many times" to denote counting using the Boolean variables V.

The first stage
Using the Boolean variables and appropriately designed tests and commands, A will have transitions which allow it to push, in a non-deterministic manner, exactly B + 1 letters from the set Q ∪ Σ onto the stack. Additionally, A also ensures, using the finite control, that the word pushed is of the form w · q · w′, where w, w′ ∈ Σ* and q ∈ Q. Once such a word has been pushed onto the stack, the variables V are reset to 0, and we note that at this point, A has pushed a configuration of the machine M onto the stack. Note that, in its finite control, A remembers the state q that it pushed onto the stack and the letter a that it pushed after q. Now, A non-deterministically picks a transition of M of the form (q, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)} and then forks into k copies of itself, with the i-th copy pushing the symbol (q_i, a_i, d_i) onto the stack. After this, it repeats this whole process of trying to push a configuration and a transition of M onto the stack. The first stage ends when A pushes an accepting configuration of M onto the stack, i.e., a configuration where the state is an accepting state of M.

Note that at this point, in each of the forked copies of A, the stack has a sequence of the form c_0 (q_1, a_1, d_1) c_1 (q_2, a_2, d_2) ... (q_k, a_k, d_k) c_k, where each c_i is a configuration of M and c_k is an accepting configuration of M.

The second stage

A verifies that the guessed sequence is indeed a valid run of M.
If c_i is the configuration at the top of the stack, A forks into two copies, with the first copy deciding to verify that the configuration c_i follows from the configuration c_{i−1} using the move (q_i, a_i, d_i), and the second copy deciding to pop the configuration c_i and (q_i, a_i, d_i) from the stack and recursively doing a similar fork to verify the run from the configuration c_{i−1}.

To verify that c_i follows from c_{i−1} using (q_i, a_i, d_i), the first copy of A proceeds as follows: it forks into two copies, with the first copy deciding to check that the current letter of c_i at the top of the stack follows correctly from the configuration c_{i−1} using (q_i, a_i, d_i), and the second copy popping the letter at the top of the stack and then recursively forking to make a similar choice for the next letter of c_i. The first copy remembers the letter at the top of the stack, pops this letter and then, using the bounded counter, removes B + 1 symbols from the stack; on the way to popping these B + 1 symbols, it also remembers the move (q_i, a_i, d_i) that it pops. After having removed these B + 1 symbols, it then pops the next four letters from the stack, remembers all four of them, and using the six pieces of information that it has remembered in its finite control, checks the consistency of these four letters with the letter from the configuration c_i. If this check succeeds, then A moves to an accepting state, else it moves to a rejecting state.

Finally, when a copy of A ends up popping everything on the stack except for the configuration c_0, it checks that this configuration is an initial configuration of the machine M, i.e., that it is of the form q_0 · w followed by B − n blank symbols, where q_0 is the initial state of M and n is the size of the input w.
To do this, it keeps popping the stack until a non-blank symbol is reached, and then, using its finite control, checks that the remaining portion of the stack is of the form q_0 · w. It is clear from the description that such an alternating extended pushdown system A can be constructed in polynomial time. ◀

▶ Lemma 7.
The emptiness problem for NEPS is EXPTIME-hard.
Proof.
We will give a reduction from the problem of checking whether an alternating linearly bounded Turing machine M accepts a word w, i.e., whether an alternating Turing machine M accepts a word w whilst using at most n = |w| tape cells. The proof can be seen as an adaptation of the proof of Lemma 6 for alternating Turing machines that use linear space instead of exponential space. But now, since an NEPS has no alternation at its disposal, it instead simulates all possible branches of the alternating linear-space Turing machine.

We assume that the Turing machine M has a finite set of states Q and tape alphabet Σ. A configuration of the machine is represented by a word over Σ ⊎ Q of the form w_1 q w_2 where w_1, w_2 ∈ Σ*, q ∈ Q, and the head of the machine is always to the right of the control state. The transitions of M are of the form (q, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)}. As discussed before, the existential branching of the alternation is captured by non-deterministically choosing a transition applicable at each configuration and the universal branching is captured by forking into many copies as specified by the chosen transition.

Note that a run of an alternating linearly bounded Turing machine can be represented by a tree whose nodes are labelled by configurations. We now construct an NEPS A which, roughly speaking, will explore this tree in a DFS order. Unlike in the proof of Lemma 6, here, instead of non-deterministically pushing a configuration and later verifying it, A 'remembers' a configuration with the help of polynomially many Boolean variables and ensures that the next configuration pushed respects the transition that is chosen to be executed. The machine A has ⌈log(|Σ| + |Q|)⌉ · (n + 1) many Boolean variables.
These variables are used to encode a configuration of the Turing machine as follows: the first ⌈log(|Σ| + |Q|)⌉ variables are used for the first letter of the configuration, the next ⌈log(|Σ| + |Q|)⌉ for the next, and so on. The machine A has three modes: the initial, forward and reverse modes.

In the initial mode, A pushes the initial configuration c_0 = q_0 · w onto the stack, one letter at a time. Moreover, it remembers the first letter a of w in its finite control and also ensures that c_0 is encoded in the Boolean variables using the encoding described above. Once the initial configuration has been pushed onto the stack, a letter of the form (t, 1), where t = (q_0, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)} is chosen non-deterministically, is pushed. A then proceeds to the forward mode.

In the forward mode, suppose the top of the stack is of the form (t, i) for t = (q, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)} and the contents of the Boolean variables encode a configuration c. Then, from the Boolean variables, A obtains the state and the three letters around the head of the configuration c. Using these pieces of information, it updates the values of the Boolean variables encoding the state and the three letters around the head according to the transition (q, a) → (q_i, a_i, d_i). Having done this, the Boolean variables now encode a new configuration c′, and A pushes this configuration c′ onto the stack. After pushing a configuration, the machine then pushes a letter of the form (t′, 1), where t′ is chosen non-deterministically out of the set of transitions that are possible from the configuration c′ stored in the Boolean variables.

If a pushed configuration contains a final state, then A goes into the reverse mode defined below, where the following happens:
- It pops elements from the stack until it reaches a letter of the form (t, i) for t = (q, a) → {(q_1, a_1, d_1), ..., (q_k, a_k, d_k)} or the bottom-of-stack symbol.
- If the bottom of the stack is reached, A has completed its DFS traversal of the accepting tree and reaches an accepting state.
- If i = k, then it pops (t, i) and the configuration below it and continues to be in the reverse mode.
- If i < k, then it pops (t, i), remembers it in its finite control, pops the next n + 1 symbols and stores the corresponding configuration that it pops in the Boolean variables. It then pushes the same configuration back onto the stack, pushes (t, i + 1) on top of it and proceeds into the forward mode.

The above NEPS simulates a run-tree of an alternating linearly bounded Turing machine and can be encoded in size polynomial in |M| and |w|, showing that emptiness of NEPS is EXPTIME-hard. ◀
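The forward/reverse-mode traversal above is essentially a depth-first exploration of an AND-tree using a single stack. The idea can be sketched as follows (a toy finite-state stand-in in Python, not the NEPS itself; `dfs_accepts`, `choose` and `accepting` are illustrative names, and `choose` fixes the non-deterministic guesses while the `("marker", t, i)` entries play the role of the letters (t, i)):

```python
def dfs_accepts(c0, choose, accepting, fuel=10_000):
    """DFS over the run-tree of an alternating system using one stack,
    mirroring the NEPS construction: the stack holds configurations
    interleaved with ("marker", transition, branch-index) entries.
    choose(c) returns one transition, i.e. a tuple of successor
    configurations that must ALL admit accepting runs (the universal
    fork), or None if no transition applies at c."""
    stack = [c0]
    mode = "forward"
    while fuel > 0:
        fuel -= 1
        if mode == "forward":
            c = stack[-1]
            if c in accepting:
                mode = "reverse"        # this branch of the tree is done
                continue
            t = choose(c)
            if t is None:               # stuck configuration: reject
                return False
            stack.append(("marker", t, 0))
            stack.append(t[0])          # descend into the first branch
        else:
            stack.pop()                 # pop the finished configuration
            if not stack:               # whole tree traversed: accept
                return True
            _, t, i = stack.pop()       # most recent (t, i) marker
            if i + 1 < len(t):          # explore the next universal branch
                stack.append(("marker", t, i + 1))
                stack.append(t[i + 1])
                mode = "forward"
            # otherwise keep popping in reverse mode
    return False
```

For instance, with `choose = {"a": ("b", "c"), "c": ("d",)}.get`, the call `dfs_accepts("a", choose, {"b", "d"})` visits the branches of "a" left to right, just as the reverse mode re-enters the forward mode with the marker (t, i + 1).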
B.1 Proof of Reduction from Alternating Extended Pushdown Systems to Special-Sync

▶ Theorem 9. Special-Sync, Subset-Ada-Sync and Ada-Sync are all 2-EXPTIME-hard. Det-Special-Sync, Det-Subset-Ada-Sync and Det-Ada-Sync are all EXPTIME-hard.
Proof.
We now present the reduction from the emptiness problem for AEPS to
Special-Sync in more detail. Let A = (Q, V, Γ, Δ, init, fin) be an AEPS. Without loss of generality, we shall assume that if (q, A, G) → {(q_1, γ_1, C_1), (q_2, γ_2, C_2), ..., (q_k, γ_k, C_k)} ∈ Δ, then γ_i ≠ γ_j for i ≠ j. This is because, if it happens that (say) γ_1 = γ_2, then we introduce a new state q′_2 and a new stack symbol #, and then replace this transition with the two transitions (q, A, G) → {(q_1, γ_1, C_1), (q′_2, # γ_2, C_2), ..., (q_k, γ_k, C_k)} and (q′_2, #, ∅) → {(q_2, ε, ∅)}. Similarly, we can introduce additional states if equality holds for other indices as well. Having made this assumption, the desired reduction is described below.

We now construct a pushdown automaton P as follows: the stack alphabet of P will be Γ. For each transition t ∈ Δ, P will have an input letter in(t). P will also have another input letter end. The state space of P will be the set Q ∪ (V × {0, 1}) ∪ {q_acc, q_rej}, where q_acc and q_rej are two new states which, on reading any input letter, leave the stack untouched and simply stay at q_acc and q_rej respectively.

Now we describe the transitions of P. Let t = (q, A, G) → {(q_1, γ_1, C_1), ..., (q_k, γ_k, C_k)} be a transition of A. Let p ∈ Q. Upon reading in(t), if p ≠ q then p immediately moves to the q_rej state. Further, even the state q moves to the q_rej state if the top of the stack is not A. However, if the top of the stack is A, then q pops A and non-deterministically pushes any one of γ_1, ..., γ_k onto the stack, and if it pushed γ_i, then q moves to the state q_i.

Let (v, b) ∈ V × {0, 1}. Upon reading in(t), if the test v ?= (1 − b) appears in the guard G, then (v, b) immediately moves to the q_rej state. (Notice that this is a purely syntactical condition on A.) Further, if the top of the stack is not A, then once again (v, b) moves to q_rej. If these two cases do not hold, then (v, b) pops A and non-deterministically picks an i ∈ {1, ..., k} and pushes γ_i onto the stack. Having pushed γ_i, if C_i does not update the variable v, it stays in state (v, b); otherwise, if C_i has a command setting v to b′, it moves to (v, b′).

Finally, upon reading end, the states in {fin} ∪ {(v, 0) : v ∈ V} move to the q_acc state and all the other states in Q ∪ (V × {0, 1}) move to the q_rej state. This ends our construction of P.

Given an assignment F : V → {0, 1} of the Boolean variables V and a state q of A, we use the notation [q, F] to denote the subset {q} ∪ {(v, F(v)) : v ∈ V} of states of P. We now analyse some basic properties of the constructed automaton P. By construction of P, it is easy to see that

Fact A:
Suppose t is a transition of A which is not enabled at the configuration (q, Aγ, F). Then, upon reading in(t), there is at least one possible successor (S, η) of the pseudo-configuration ([q, F], Aγ) such that q_rej ∈ S.

Indeed, suppose t = (p, B, G) → {(p_1, γ_1, C_1), ..., (p_k, γ_k, C_k)} is a transition of A which is not enabled at (q, Aγ, F). Either q ≠ p, in which case the state q moves to q_rej in P; or A ≠ B, in which case all the states in [q, F] move to q_rej in P; or for some variable v, the value F(v) does not satisfy some guard in G, which can happen iff the test v ?= 1 − F(v) appears in G, in which case the state (v, F(v)) moves to q_rej in P. This proves Fact A.

Now, recall that if (q, A, G) → {(q_1, γ_1, C_1), ..., (q_k, γ_k, C_k)} is a transition in A, then γ_i ≠ γ_j for any i ≠ j. With this in mind, the following fact is rather immediate to see.

Fact B:
Suppose the configuration (q, Aγ, F) forks into the configurations (q_1, γ_1 γ, F_1), ..., (q_k, γ_k γ, F_k) using the transition t in the AEPS A. Then, the possible successors of the pseudo-configuration ([q, F], Aγ) upon reading in(t) in the PDA P are ([q_1, F_1], γ_1 γ), ..., ([q_k, F_k], γ_k γ).

Using these two facts, we now claim that there exists an accepting run from a configuration (q, η, H) in A iff there exists a super-synchroniser between ([q, H], η) and q_acc in P.

(⇒) Suppose there is an accepting run from a configuration (q, η, H) in A. We prove the claim by induction on the size of the accepting run. For the base case of 1, it must be the case that (q, η, H) = (fin, ⊥, 0̄), where 0̄ is the all-zero assignment. In this case, by inputting the letter end, it is clear that there is a super-synchroniser between ([fin, 0̄], ⊥) and q_acc in P.

Suppose we have an accepting run T of size m + 1 from the configuration (q, η, H) in A. The root of T is labelled by (q, η, H). Suppose its children are labelled by (q_1, γ_1, F_1), ..., (q_k, γ_k, F_k). By the induction hypothesis, for each 1 ≤ i ≤ k, we have a super-synchroniser between ([q_i, F_i], γ_i) and q_acc in P. By Fact B, it follows that we then have a super-synchroniser between ([q, H], η) and q_acc.

(⇐) Suppose there exists a super-synchroniser (say T) between ([q, H], η) and q_acc in P. Without loss of generality, we can assume that only the leaves of T are labelled by ({q_acc}, ⊥). Let (S_n, γ_n) be the label of each node n in T. We now proceed to make some observations. Since there are no outgoing transitions out of q_rej and since S_n = {q_acc} for every leaf node n, it follows that

Fact C:
For every node n, q_rej ∉ S_n.

By induction on the structure of the tree T, we prove that

Fact D: If n is a non-leaf node, then S_n = [p_n, F_n] for some p_n ∈ Q and some F_n : V → {0, 1}. Further, if the outgoing edge from some node n is labelled by end, then n is the parent of a leaf, with (S_n, γ_n) = ([fin, 0̄], ⊥).

If n is the root node, then clearly S_n = [q, H] and so satisfies the claim. Suppose for some non-leaf node n, S_n = [p_n, F_n]. Suppose the outgoing edges from n are labelled by some letter in(t). If t is not enabled at the configuration (p_n, γ_n, F_n) in A, then by Fact A there is at least one child n′ of n with q_rej ∈ S_{n′}, which contradicts Fact C. Hence, t must be enabled at (p_n, γ_n, F_n) in A. By Fact B, it is then clear that for all children n′ of n, S_{n′} is also of the form [p_{n′}, F_{n′}] for some p_{n′} ∈ Q and F_{n′} : V → {0, 1}.

Hence, the only remaining case is when the outgoing edges of n are labelled by end. In this case, there is only one child of n (say n′). If p_n ≠ fin or if F_n ≠ 0̄, then q_rej ∈ S_{n′}, which contradicts Fact C. Hence, S_n = [fin, 0̄] and so S_{n′} = {q_acc}. Notice that if γ_n is not ⊥, then γ_{n′} is also not ⊥. No transition of q_acc pops the stack and there is no outgoing transition from q_acc, and so it would follow that no leaf in the subtree of n′ is labelled by ({q_acc}, ⊥), which is a contradiction. Hence, γ_n = ⊥ and so γ_{n′} = ⊥. Since (S_{n′}, γ_{n′}) = ({q_acc}, ⊥), n′ is a leaf. Hence, Fact D is true.

By Facts A, B, C and D, it then follows that if we take T, remove all its leaves and change the label ([p_n, F_n], γ_n) of each node n to the label (p_n, γ_n, F_n), we get an accepting run of A. Hence, there is an accepting run of A iff there is a super-synchroniser between ([init, 0̄], ⊥) and q_acc in P. Notice that P is deterministic if A is non-deterministic. Hence, by Lemmas 6 and 7, we get the required claims. ◀

C Proofs of Section 5
Throughout this section, we fix a single PDA P = (Q, Σ, Γ, δ) with I ⊆ Q and s ∈ Q. This gives rise to the alternating pushdown system A_P = (2^Q, Γ, Δ, I, {s}).

▶ Proposition 12.
Suppose S ⊆ Q, a ∈ Σ, A ∈ Γ and suppose E_1, ..., E_k are the equivalence classes of ∼^a_{S,A}. Then, |S| ≥ ∑_{i=1}^{k} |next(E_i)|.

Proof.
By definition, T^a_{S,A} = {t ∈ δ : t = (p, a, A, q, γ) where p ∈ S}. Because P is deterministic, the size of T^a_{S,A} is at most |S|. Now, the relation ∼^a_{S,A} partitions T^a_{S,A} into the equivalence classes E_1, ..., E_k, and for each i, next(E_i) is simply the set {q : (p, a, A, q, γ) ∈ E_i}. Since |T^a_{S,A}| is at most |S|, it follows that ∑_{i=1}^{k} |next(E_i)| is at most |S|. ◀

▶ Lemma 13.
For any S ⊆ Q and γ ∈ Γ*, any accepting run of A_P from the configuration (S, γ) has at most |S| leaves.

Proof.
Let T be any accepting run of (S, γ) = (S, Aη). We proceed by induction on the size of T. The base case of 1 is trivial. For the induction step, suppose the size of T is m + 1 for some m ≥ 0. Let v_1, ..., v_k be the children of the root. By the nature of the transitions in A_P, it follows that there exist a ∈ Σ and equivalence classes E_1, ..., E_k of ∼^a_{S,A} such that v_i is labelled by (next(E_i), word(E_i) η). By the induction hypothesis, the sub-tree rooted at v_i has at most |next(E_i)| leaves. By Proposition 12, we have that ∑_{i=1}^{k} |next(E_i)| ≤ |S|. Hence, the total number of leaves of the tree T is at most |S|. ◀

C.1 Proofs for subsection 5.3
Let us fix an alternating pushdown system A = (Q, Γ, Δ, init, fin) and a number k. From A, we can derive a non-deterministic pushdown system obtained by deleting all transitions of the form (q, A) → {(q_1, γ_1), ..., (q_m, γ_m)} with m > 1. We will denote this NPS by N.

▶ Lemma 19.
There is a k-accepting run of A from a configuration (p, η) iff there is a k-compressed accepting run of A from (p, η).

Proof. (⇒): Suppose we have a k-accepting run of A from (p, η), say T. Let us proceed by induction on |T|. If |T| = 1, we are done. Otherwise, let r be the root of T. If r is a simple vertex, then let v be the unique closest descendant of r such that v is complex (such a vertex always exists by the definition of simple and complex vertices). Note that the sub-tree rooted at v also has at most k leaves. If we let (p′, η′) be the label of v, then by the induction hypothesis there is a k-compressed accepting run from (p′, η′), say T′. Now, take T′ and add (p, η) as a parent to (p′, η′) in T′. By definition, this gives rise to a k-compressed accepting run from (p, η).

If r is a complex vertex, let v_1, ..., v_m be the children of r such that the label of each v_i is (p_i, η_i) and the sub-tree rooted at v_i has ℓ_i leaves. By the induction hypothesis, for each i, there is an ℓ_i-compressed accepting run from (p_i, η_i). Taking all these trees and adding (p, η) as their root gives rise to a k-compressed accepting run from (p, η).

(⇐): Suppose we have a k-compressed accepting run of A from (p, η), say T. Let us proceed by induction on |T|. If |T| = 1, we are done. Otherwise, let r be the root of T. If r is a simple vertex, then let v be the only child of r and let the label of v be (p′, η′). By definition of the k-compressed accepting run T, we have a run (p, η) →_N (p_1, η_1) →_N (p_2, η_2) ... (p_m, η_m) →_N (p′, η′). By the induction hypothesis, we have a k-accepting run of A from (p′, η′), say T′. Now, take T′ and attach the linear chain of vertices (p, η), (p_1, η_1), ..., (p_m, η_m) before its root. This gives rise to a k-accepting run of A from (p, η).

If r is a complex vertex, let v_1, ..., v_m be the children of r such that the label of each v_i is (p_i, η_i) and the sub-tree rooted at v_i has ℓ_i leaves. By the induction hypothesis, for each i, there is an ℓ_i-accepting run from (p_i, η_i). Taking all these trees and adding (p, η) as their root gives rise to a k-accepting run from (p, η). ◀

▶ Proposition 21.
For each vertex v of the tree T, M_v satisfies invariant (*).

Proof.
Recall that invariant (*) was the following:

Invariant (*): A configuration (q, γ) ∈ C(M_v) iff all the vertices of the sub-tree rooted at v can be labelled such that the resulting labelled sub-tree is a compressed accepting run of A from (q, γ).

Let us proceed by induction on the structure of the tree T. By construction, the invariant is true for all leaves v. Now, suppose we have a simple vertex v. Let u be its only child. By the induction hypothesis, assume that the invariant is true for u. By construction, M_v is an automaton such that C(M_v) = {(q′, γ′) : ∃(q, γ) ∈ C(M_u) such that (q′, γ′) →*_N (q, γ)}. It then immediately follows that the invariant is satisfied for the vertex v as well.

Suppose we have a complex vertex v and let v_1, ..., v_ℓ be its children. Suppose (p, Aγ) ∈ C(M_v). By construction of M_v, it then follows that there exists a transition (p, A) → {(p_1, γ_1), ..., (p_ℓ, γ_ℓ)} of A such that for each i, the configuration (p_i, γ_i γ) ∈ C(M_{v_i}). By the induction hypothesis, for each i, the sub-tree rooted at v_i can be labelled so that the resulting labelled sub-tree is a compressed accepting run from (p_i, γ_i γ). By taking this labelling for each of the sub-trees rooted at v_1, ..., v_ℓ and then labelling the vertex v by (p, Aγ), we get a labelling of the sub-tree rooted at v which is a compressed accepting run from the configuration (p, Aγ).

Conversely, suppose for some configuration (p, Aγ), it is possible to label the sub-tree rooted at v so that it becomes a compressed accepting run from the configuration (p, Aγ). Hence, there exists a transition (p, A) → {(p_1, γ_1), ..., (p_ℓ, γ_ℓ)} of A such that for each i, the label of v_i under this labelling is (p_i, γ_i γ). By the induction hypothesis, for each i, we have that (p_i, γ_i γ) ∈ C(M_{v_i}). By construction of M_v, it follows that (p, Aγ) ∈ C(M_v). Hence, the invariant is satisfied when v is a complex vertex as well.
◀

Running time analysis
Let us analyse the running time of
Check. Let T be a k-structured tree; therefore T has O(k) vertices. Check assigns to each vertex v of T an automaton M_v. We claim that the running time of Check is O(k · |A|^{ck}) (for some fixed constant c) because of the following facts:
1) By induction on the structure of the tree T, it can be proved that there exists a constant d such that if h_v is the height of a vertex v and l_v is the number of leaves in the sub-tree of v, then the number of states of M_v is O(|A|^{d·h_v·l_v}) (recall that h_v·l_v is at most O(k)).
2) If an N-automaton has n states, then the number of transitions it can have is O(|A| · n).
3) For a vertex v with children v_1, ..., v_ℓ, M_v can be constructed in time polynomial in |M_{v_1}| × |M_{v_2}| × ... × |M_{v_ℓ}| and |A|.

Notice that everything apart from Fact 1) is easy to see. To prove Fact 1), we proceed by bottom-up induction on the structure of the tree T. For the base case, when the vertex v is a leaf, notice that we can easily construct the required automaton M_v with at most O(|A|) states. Suppose v is a simple vertex and u its only child. By Theorem 20, M_v has the same set of states as M_u. By the induction hypothesis, the number of states of M_u is O(|A|^{d·h_u·l_u}) and so the number of states of M_v is O(|A|^{d·h_v·l_v}). Suppose v is a complex vertex and v_1, ..., v_ℓ are its children. Let h be the maximum height amongst the vertices v_1, ..., v_ℓ. By the induction hypothesis, the number of states of each M_{v_i} is O(|A|^{d·h·l_{v_i}}). It is then clear that the number of states of M_v is O(∏_{i=1}^{ℓ} |A|^{d·h·l_{v_i}} + |A|) = O(|A|^{d·h·l_v} + |A|) = O(|A|^{d·(h+1)·l_v}) = O(|A|^{d·h_v·l_v}).

Now the final algorithm for Sparse-Empty simply iterates over all k-structured trees and calls Check on all of them.
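This outer loop can be sketched as follows (a toy illustration only: `enumerate_trees` generates ordered tree shapes with a bounded number of leaves as a stand-in for the paper's k-structured trees, whose exact definition is not reproduced here, and the `check` argument stands in for the Check procedure):

```python
import itertools

def leaves(t):
    """Number of leaves of a tree shape given as nested tuples."""
    return 1 if not t else sum(leaves(c) for c in t)

def enumerate_trees(max_leaves, max_depth):
    """Yield ordered tree shapes with at most max_leaves leaves and depth
    at most max_depth -- a toy stand-in for the k-structured trees."""
    yield ()                           # a single leaf
    if max_depth == 0:
        return
    subtrees = list(enumerate_trees(max_leaves, max_depth - 1))
    for arity in range(1, max_leaves + 1):
        for children in itertools.product(subtrees, repeat=arity):
            t = tuple(children)
            if leaves(t) <= max_leaves:
                yield t

def sparse_empty(k, check):
    """Iterate over all candidate trees and run check on each of them,
    as the final algorithm for Sparse-Empty does with Check."""
    return any(check(t) for t in enumerate_trees(k, max_depth=k))
```

Since every tree shape is checked independently, the enumeration dominates the bound quoted below only through its (exponential in k) count.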
Since the number of k-structured trees is at most f(k), where f is an exponential function, it follows that the total running time is O(f(k) · k · |A|^{ck}) = O(|A|^{ek}) for some constant e.

D Homing Problem
Intuitively, in the homing problem, there is an observer who has no knowledge of the current state of the PDA. The problem then asks if there is a strategy for the observer to input letters adaptively and narrow down the possible set of states the PDA is in to exactly one state.

We first define the notion of a homing word from a pseudo-configuration. Let P = (Q, Σ, Γ, δ) be a PDA with I ⊆ Q and γ ∈ Γ*. We say that the pseudo-configuration (I, γ) admits a homing word if there is a labelled tree T satisfying the following conditions:
- All the edges are labelled by some input letter a ∈ Σ such that, for every vertex v, all its outgoing edges have the same label.
- The root is labelled by the pseudo-configuration (I, γ).
- Suppose v is a vertex which is labelled by the pseudo-configuration (S, Aη). Let a be the unique label of its outgoing edges and let Succ(S, Aη, a) be of size k. Then v has k children, with the i-th child labelled by the i-th pseudo-configuration in Succ(S, Aη, a).
- For every leaf, its label is ({q}, η) for some q ∈ Q and η ∈ Γ*.

Notice that any synchroniser from some pseudo-configuration (I, γ) to some state s is also a homing word from (I, γ). For the automaton in Figure 1, we see that the tree in Figure 2 is a homing word from the pseudo-configuration consisting of all four of its states together with the stack content ⊥. In fact, the subtree of the one in Figure 2 obtained by pruning the tree as soon as a pseudo-configuration with one state is reached is also a homing word from this pseudo-configuration.

The homing problem Homing is now defined as follows:
Given: A PDA P = (Q, Σ, Γ, δ) and a word γ ∈ Γ*.
Decide: If there is a homing word from (Q, γ).

Similarly, the subset homing problem Subset-Homing is defined as:

Given: A PDA P = (Q, Σ, Γ, δ), a subset I ⊆ Q and a word γ ∈ Γ*.
Decide: If there is a homing word from (I, γ).

By using the same reduction as given in Lemma 4, it follows that

▶ Lemma 26.
Lemma 26.
Homing and
Subset-Homing are polynomial-time equivalent.
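For intuition, the search for such an adaptive strategy can be sketched on a finite-state machine with observable outputs, a simplified stand-in for the PDA setting in which the observation plays the role of watching the stack; `find_homing_strategy` and `step` are illustrative names, not from the paper:

```python
def find_homing_strategy(inputs, step, S, seen=frozenset()):
    """Search for an adaptive homing strategy from the uncertainty set S.
    step(q, a) -> (next_state, observation). Returns "HOME" when |S| = 1,
    a pair (input, {observation: substrategy}) when some input works,
    and None when no strategy exists (uncertainty sets repeating along a
    branch are pruned via `seen`)."""
    S = frozenset(S)
    if len(S) == 1:
        return "HOME"
    if S in seen:
        return None
    for a in inputs:
        # group the successor states by the observation they produce
        blocks = {}
        for q in S:
            q2, obs = step(q, a)
            blocks.setdefault(obs, set()).add(q2)
        children = {}
        for obs, T in blocks.items():
            sub = find_homing_strategy(inputs, step, T, seen | {S})
            if sub is None:
                children = None
                break
            children[obs] = sub
        if children is not None:
            return (a, children)
    return None
```

The returned tree mirrors the homing-word tree of the definition above: one input per vertex, one child per possible observation, singleton uncertainty sets at the leaves.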
We now have the following lemma which relates the homing problem to the adaptive synchronisation problem.

▶
Lemma 27.
Homing is polynomial-time equivalent to
Ada-Sync.

Proof.
Reducing an instance of Homing to Ada-Sync
By Lemmas 4 and 5, it suffices to show that
Homing can be reduced to
Given-Sync. Let P = (Q, Σ, Γ, δ) and γ ∈ Γ*. We construct a PDA P′ = (Q′, Σ′, Γ, δ′) as follows: Q′ consists of all the states of P, along with two new states q_acc and q_rej. Σ′ is taken to be Σ ∪ {a_q : q ∈ Q}. The transition relation δ′ contains all the transitions in δ and in addition has the following new ones: a state q ∈ Q, upon reading a_q, moves to q_acc and, upon reading a_p for some p ≠ q, moves to q_rej. We now claim that there is a homing word in P from (Q, γ) iff there is a synchroniser between (Q, γ) and q_acc in P′.

(⇒): Let T be a homing word in P from (Q, γ). For each leaf v, do the following: suppose ({q}, η_v) is the label of the leaf v. Add an outgoing edge from v labelled by a_q and label the child of this edge by ({q_acc}, η_v). It is clear that the modified tree is a synchroniser between (Q, γ) and q_acc in P′.

(⇐): Let T be a synchroniser between (Q, γ) and q_acc in P′. We can assume that T is a minimal such synchroniser. For each vertex v, let (S_v, η_v) be the label of v in T. Since there are no outgoing transitions from q_rej and since S_u = {q_acc} for every leaf u, it follows that q_rej ∉ S_v for any vertex v. We now claim that for every non-leaf v, S_v ⊆ Q; further, if an outgoing edge of v is labelled by a_q for some q, then S_v = {q} and v is the parent of a leaf.

Clearly, S_v ⊆ Q when v is the root. Suppose for some non-leaf v, S_v ⊆ Q. If the outgoing edges from v are labelled by some letter from Σ, then it is clear that for all children v′ of v we have S_{v′} ⊆ Q. Suppose the outgoing edge from v is labelled by a_q for some q ∈ Q. Then, there is only one child of v (say v′). If S_v ≠ {q}, then q_rej ∈ S_{v′}, which leads to a contradiction. Hence, S_v = {q} and S_{v′} = {q_acc}. By minimality of T, v′ is a leaf.

Hence, it follows that if we remove the leaves of T, we get a homing word in P from (Q, γ). Hence, there is a homing word in P from (Q, γ) iff there is a synchroniser between (Q, γ) and q_acc in P′. Therefore, we get that Homing is reducible to
Given-Sync.

Reducing an instance of Ada-Sync to Homing
By Lemmas 4, 5 and 26, it suffices to show that
Given-Sync can be reduced to
Subset-Homing. Let P = (Q, Σ, Γ, δ) be a PDA with γ ∈ Γ*, I ⊆ Q and s ∈ Q. We now construct P′ = (Q′, Σ′, Γ, δ′) as follows: the states of P′ will be Q′ = (Q × {0, 1}) ∪ {q_acc, (q_rej, 0), (q_rej, 1)}. The input alphabet of P′ will be Σ ∪ {end}. Upon reading any letter from Σ, the transitions on a state (q, b) ∈ Q × {0, 1} just mimic the transitions of δ on the first co-ordinate and leave the second one unchanged. Upon reading end, the states (s, 0), (s, 1) and q_acc move to q_acc and all the other states move to their corresponding copy of q_rej. We now claim that there is a synchroniser from (I, γ) to s in P iff there is a homing word from (I × {0, 1}, γ) in P′.

(⇒): Suppose T is a synchroniser from (I, γ) to s in P. Let (S_v, η_v) be the label of each vertex v in T. Modify T as follows: for each vertex v, replace S_v with S_v × {0, 1}. Further, to each leaf v of T, add an outgoing edge labelled with end, with the child of this edge being labelled by the pseudo-configuration ({q_acc}, η_v). It is now easy to see that this modified tree T is a homing word from (I × {0, 1}, γ) in P′.

(⇐): Suppose P′ has a homing word from (I × {0, 1}, γ). Let T be a minimal such homing word. Let (S_v, η_v) be the label of each vertex v in T. We claim the following: for any non-leaf vertex v, there exists Q_v ⊆ Q such that S_v = Q_v × {0, 1}; further, if the outgoing edge from some vertex v is labelled by end, then v is the parent of a leaf with S_v = {s} × {0, 1}.

Let us prove this by induction on the structure of T. Clearly, for the root vertex v, we have Q_v = I. Suppose for some non-leaf vertex v there exists Q_v ⊆ Q with S_v = Q_v × {0, 1}. If the outgoing edges from v are labelled by some letter from Σ, then it is easy to see that for every child v′ of v, there exists Q_{v′} ⊆ Q with S_{v′} = Q_{v′} × {0, 1}. Suppose the outgoing edge from v is labelled by end. Hence, there is only one child of v (say v′). If Q_v ≠ {s}, it follows that both (q_rej, 0) and (q_rej, 1) belong to S_{v′}. Notice that there are no outgoing transitions from these two states. Hence, for every vertex v′′ in the subtree rooted at v′, we would have (q_rej, 0) ∈ S_{v′′} and (q_rej, 1) ∈ S_{v′′}, which would be a contradiction. Hence, Q_v = {s} and so S_{v′} = {q_acc}. By minimality of the tree T, it follows that v′ is a leaf.

It then follows that if we remove the leaves of T and project the label of each vertex v to the subset Q_v, we will get a synchroniser from (I, γ) to s in P. Hence, there is a synchroniser from (I, γ) to s in P iff there is a homing word from (I × {0, 1}, γ) in P′. Therefore, we get that Given-Sync is reducible to Subset-Homing.
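The doubling construction of this direction can be sketched as follows (a stack-free simplification in Python; `build_homing_instance` and the dictionary encoding of δ are illustrative names and not from the paper):

```python
def build_homing_instance(states, sigma, delta, s):
    """Given a machine P (here a finite transition table delta[(q, a)] = q')
    and a target state s, build the doubled machine P': states Q x {0, 1}
    plus q_acc and two copies of q_rej, with a fresh letter "end" that
    sends (s, 0), (s, 1) and q_acc to q_acc and every other state to its
    corresponding copy of q_rej."""
    q2_states = [(q, b) for q in states for b in (0, 1)]
    q2_states += ["q_acc", ("q_rej", 0), ("q_rej", 1)]
    delta2 = {}
    for (q, a), q_next in delta.items():
        for b in (0, 1):
            # mimic P on the first co-ordinate, keep the bit unchanged
            delta2[((q, b), a)] = (q_next, b)
    for q in states:
        for b in (0, 1):
            delta2[((q, b), "end")] = "q_acc" if q == s else ("q_rej", b)
    delta2[("q_acc", "end")] = "q_acc"
    return q2_states, sigma + ["end"], delta2
```

The two bits guarantee that homing cannot succeed "by accident": the uncertainty set always contains both copies of every surviving state, so the only way to reach a singleton is via end from the target state s, exactly as in the proof above.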