Sequential Composition in the Presence of Intermediate Termination
Submitted to: EXPRESS/SOS 2017
© Jos Baeten, Bas Luttik & Fei Yang. This work is licensed under the Creative Commons Attribution License.
Jos Baeten
CWI, Amsterdam, the Netherlands / University of Amsterdam, Amsterdam, the Netherlands
Bas Luttik
Eindhoven University of Technology, Eindhoven, The Netherlands ([email protected])
Fei Yang
Eindhoven University of Technology, Eindhoven, The Netherlands ([email protected])
Abstract. The standard operational semantics of the sequential composition operator gives rise to unbounded branching and forgetfulness when transparent process expressions are put in sequence. Due to transparency, the correspondence between context-free and pushdown processes fails modulo bisimilarity, and it is not clear how to specify an always terminating half counter. We propose a revised operational semantics for the sequential composition operator in the context of intermediate termination. With the revised operational semantics, we eliminate transparency. As a consequence, we establish a correspondence between context-free processes and pushdown processes. Moreover, we prove the reactive Turing powerfulness of TCP with iteration and nesting with the revised operational semantics for sequential composition.
1 Introduction

The integration of concurrency theory and the classical theory of formal languages has been studied extensively in recent years [2]. Many notions from the classical theory have counterparts in concurrency theory [14]. However, a complete correspondence has not yet been established for all notions. In our view, a major obstacle is the phenomenon of transparency of the sequential composition operator in the presence of intermediate termination.

Sequential composition is a standard operator in many process calculi. Its purpose is to concatenate the behaviours of two systems, and it is widely written with the notation "·". We illustrate its operational semantics with a process P · Q in TCP [1]. If the process P has a transition P −a→ P′ for some action label a, then the composition P · Q has the transition P · Q −a→ P′ · Q. Termination is an important behaviour for models of computation [1]. In combination with sequential composition, it gives two additional rules in the operational semantics. The first states that P · Q terminates if both P and Q terminate; the second states that if P terminates and there is a transition Q −a→ Q′ for some action label a, then we have the transition P · Q −a→ Q′. Together with the first rule, these rules characterise the behaviour of a system consisting of two concatenated parts: the system may make a transition of the first part; if the first part is terminating, the system may choose to skip the first part and make a transition of the second part; and if both parts are terminating, the combined system is also terminating.

In this paper, we discuss a complication with this standard operational semantics of the sequential composition operator: the transparency caused by the interplay of sequential composition and termination [2].
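To make these rules concrete, here is a small Python sketch (our own illustration; the term encoding and function names are ours, not the paper's) implementing termination and the transition relation for the fragment with 0, 1, action prefix, choice and standard sequential composition:

```python
# Minimal interpreter for the standard SOS rules of sequential composition.
# Terms (our own encoding, for illustration only):
#   ('zero',)            deadlock 0
#   ('one',)             successful termination 1
#   ('prefix', a, P)     action prefix a.P
#   ('choice', P, Q)     alternative composition P + Q
#   ('seq', P, Q)        sequential composition P . Q

def terminates(p):
    tag = p[0]
    if tag == 'one':
        return True
    if tag in ('zero', 'prefix'):
        return False
    if tag == 'choice':
        return terminates(p[1]) or terminates(p[2])
    return terminates(p[1]) and terminates(p[2])   # seq: both parts terminate

def steps(p):
    """All pairs (a, p') such that p --a--> p'."""
    tag = p[0]
    if tag in ('zero', 'one'):
        return []
    if tag == 'prefix':                            # a.P --a--> P
        return [(p[1], p[2])]
    if tag == 'choice':
        return steps(p[1]) + steps(p[2])
    out = [(a, ('seq', q, p[2])) for a, q in steps(p[1])]
    if terminates(p[1]):                           # transparency: skip the first part
        out += steps(p[2])
    return out

# (a.1 + 1) . b.1 can do the a of the first part, or skip it and do b at once.
P = ('seq',
     ('choice', ('prefix', 'a', ('one',)), ('one',)),
     ('prefix', 'b', ('one',)))
print(sorted(a for a, _ in steps(P)))              # ['a', 'b']
```

For (a.1 + 1) · b.1 the sketch yields both an a-step from the first component and a b-step that skips the transparent first component, exactly the behaviour described above.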
A process expression is called transparent if it is terminating; it is then transparent in a "sequential context". We have observed two disadvantages of transparency, in the following research problems.

The relationship between context-free processes and pushdown automata has been discussed extensively in the literature [3]. It has been shown that every context-free process can be specified by a pushdown process modulo contrasimulation. However, this result is no longer valid modulo rooted branching bisimulation, which is a finer behavioural equivalence. By stacking unboundedly many transparent terms with sequential composition, we obtain a transition system with unboundedly branching behaviour. It was shown that such unboundedly branching behaviour cannot be specified by any pushdown process modulo rooted branching bisimulation [3]. In order to improve the result to a finer notion of behavioural equivalence, we need to eliminate the problem of unbounded branching.

Another problem is that transparency makes a stack of transparent process expressions forgetful. The notion of reactive Turing machines (RTMs) [4] was introduced as a model integrating concurrency and computability. The transition systems associated with RTMs are called executable. We use the RTM as a criterion of absolute expressiveness of process calculi and models of interactive computation [10, 11]. A process calculus is called reactively Turing powerful if every executable transition system can be specified in it. The process calculus TCP with iteration and nesting is Turing complete [5, 6]. Moreover, it follows from the result in [6] that it is reactively Turing powerful if termination is left out of consideration. However, it is not clear to us how to reconstruct the proof of reactive Turing powerfulness if termination is considered.
Due to the forgetfulness of stacking transparent process expressions, it is not clear to us how to define an always terminating counter, which is crucial to establish reactive Turing powerfulness.

In order to avoid the unwanted features of unbounded branching and forgetfulness, we propose a revised operational semantics for the sequential composition operator. Intuitively, we disallow a transition from the second component of a sequential composition as long as the first component is still able to perform a transition. Thus, with the revised operator, we avoid the problems mentioned above. We shall prove that, in the revised semantics, every context-free process is bisimilar to a pushdown process, and that TCP with iteration and nesting is reactively Turing powerful modulo divergence-preserving branching bisimilarity without using recursive specifications.

The paper is structured as follows. We first introduce TCP with the standard version of sequential composition in Section 2. Next, we discuss the complications caused by transparency in Section 3. Then, in Section 4, we propose the revised operational semantics of the sequential composition operator and show that rooted divergence-preserving branching bisimilarity is a congruence. In Section 5, we revisit the relationship between context-free processes and pushdown automata, and show that every context-free process is bisimilar to a pushdown process in our revised semantics. In Section 6, we prove that TCP with iteration and nesting is reactively Turing powerful in the revised semantics. In Section 7, we draw some conclusions and propose future work.

2 Preliminaries

We start by introducing the notion of labelled transition systems, which is used as the standard mathematical representation of behaviour. We consider transition systems with a subset of states marked as final states.
We let A be a set of action symbols, and we extend A with a special symbol τ ∉ A, which intuitively denotes unobservable internal activity of the system. We abbreviate A ∪ {τ} by Aτ.

Definition 1. An Aτ-labelled transition system is a tuple (S, −→, ↑, ↓), where
1. S is a set of states,
2. −→ ⊆ S × Aτ × S is an Aτ-labelled transition relation,
3. ↑ ∈ S is the initial state, and
4. ↓ ⊆ S is a set of terminating states.

Next, we shall use the process calculus TCP, which allows us to describe transition systems. Let C be a set of channels and D be a set of data symbols. For every subset C′ ⊆ C, we define a special set of actions IC′ ⊆ Aτ by:

  IC′ = { c?d, c!d | d ∈ D, c ∈ C′ }.

The actions c?d and c!d denote the events that a datum d is received or sent along channel c. Furthermore, let N be a countably infinite set of names. The set of process expressions P is generated by the following grammar (a ∈ Aτ, N ∈ N, C′ ⊆ C):

  P ::= 0 | 1 | a.P | P · P | [P ∥ P]C′ | P + P | N.

We briefly comment on the operators in this syntax. The constant 0 denotes deadlock, the unsuccessfully terminated process. The constant 1 denotes termination, the successfully terminated process. For each action a ∈ Aτ there is a unary operator a._ denoting action prefix; the process denoted by a.P can do an a-labelled transition to the process P. The binary operator + denotes alternative composition or choice. The binary operator [_ ∥ _]C′ deviates from TCP in [1]; it denotes a special kind of parallel composition that enforces communication along the channels in C′, and communication results in τ. The binary operator · represents the sequential composition of two processes.

Let P be an arbitrary process expression; we use an abbreviation P^n inductively defined by:
1. P^0 = 1; and
2. P^(n+1) = P · P^n for all n ∈ ℕ.

A recursive specification E is a set of equations E = { N def= P | N ∈ N, P ∈ P }
satisfying the requirements that
1. for every N ∈ N it includes at most one equation with N as left-hand side, which is referred to as the defining equation for N; and
2. if some name N′ occurs in the right-hand side P of some equation N def= P in E, then E must include a defining equation for N′.

A recursive specification is guarded if every occurrence of a name in the right-hand side of one of its equations is within the scope of an action prefix.

We use structural operational semantics to associate a transition relation with the process expressions of TCP. We let −→ be the Aτ-labelled transition relation induced on the set of process expressions by the operational rules in Figure 1; note that we presuppose a recursive specification E. Here we use P −a→ P′ to denote an a-labelled transition (P, a, P′) ∈ −→. We say that a process expression P′ is reachable from P if there exist process expressions P₀, ..., Pₙ and labels a₁, ..., aₙ such that P = P₀ −a₁→ P₁ ... −aₙ→ Pₙ = P′.

Given a TCP process expression P, the transition system T(P) = (S_P, −→_P, ↑_P, ↓_P) associated with P is defined as follows:
1. the set of states S_P consists of all process expressions reachable from P;
2. the set of transitions −→_P is the restriction to S_P of the transition relation defined on all process expressions by the structural operational semantics, i.e., −→_P = −→ ∩ (S_P × Aτ × S_P);
3. ↑_P = P; and
4. the set of final states ↓_P consists of all process expressions Q ∈ S_P such that Q↓, i.e., ↓_P = ↓ ∩ S_P.

The operational rules of Figure 1 are, written in premises-imply-conclusion form:

  1↓        a.P −a→ P

  P₁ −a→ P₁′ implies P₁ + P₂ −a→ P₁′        P₂ −a→ P₂′ implies P₁ + P₂ −a→ P₂′
  P₁↓ implies (P₁ + P₂)↓                    P₂↓ implies (P₁ + P₂)↓

  P₁ −a→ P₁′ and a ∉ IC′ implies [P₁ ∥ P₂]C′ −a→ [P₁′ ∥ P₂]C′
  P₂ −a→ P₂′ and a ∉ IC′ implies [P₁ ∥ P₂]C′ −a→ [P₁ ∥ P₂′]C′
  P₁ −c?d→ P₁′, P₂ −c!d→ P₂′ and c ∈ C′ implies [P₁ ∥ P₂]C′ −τ→ [P₁′ ∥ P₂′]C′
  P₁ −c!d→ P₁′, P₂ −c?d→ P₂′ and c ∈ C′ implies [P₁ ∥ P₂]C′ −τ→ [P₁′ ∥ P₂′]C′
  P₁↓ and P₂↓ implies [P₁ ∥ P₂]C′↓

  P₁↓ and P₂↓ implies (P₁ · P₂)↓
  P₁ −a→ P₁′ implies P₁ · P₂ −a→ P₁′ · P₂
  P₁↓ and P₂ −a→ P₂′ implies P₁ · P₂ −a→ P₂′

  P −a→ P′ and (N def= P) ∈ E implies N −a→ P′
  P↓ and (N def= P) ∈ E implies N↓

Figure 1: The operational semantics of TCP.

We also use the process calculus TSP in later sections; it is obtained from TCP by excluding the parallel composition operator.

The notion of behavioural equivalence has been used extensively in the theory of process calculi. We first introduce the notion of strong bisimulation [12, 13], which does not distinguish τ-transitions from other labelled transitions.

Definition 2.
A binary symmetric relation R on a transition system (S, −→, ↑, ↓) is a strong bisimulation if, for all states s, t ∈ S, s R t implies
1. if s −a→ s′, then there exists t′ ∈ S such that t −a→ t′ and s′ R t′; and
2. if s↓, then t↓.
The states s and t are strongly bisimilar (notation: s ↔ t) if there exists a strong bisimulation R such that s R t.

The notion of strong bisimilarity does not take into account the intuition associated with τ that it stands for unobservable internal activity. To this end, we proceed to introduce the notion of (divergence-preserving) branching bisimilarity, which does treat τ-transitions as unobservable. Divergence-preserving branching bisimilarity is the finest behavioural equivalence in van Glabbeek's linear time - branching time spectrum [8]. Let −→ be an Aτ-labelled transition relation on a set S, and let a ∈ Aτ; we write s −(a)→ t for the formula "s −a→ t ∨ (a = τ ∧ s = t)". Furthermore, we denote the transitive closure of −τ→ by −→⁺ and the reflexive-transitive closure of −τ→ by −→*.

Definition 3.
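On finite transition systems, strong bisimilarity can be decided by partition refinement. The following naive Python sketch (our own, simplified and unoptimised; not from the paper) refines blocks by transition-and-termination signatures until the partition is stable:

```python
# Naive strong-bisimilarity check on a finite LTS by partition refinement.
# 'trans' is a list of (source, label, target); 'final' is the set of
# terminating states.

def bisimilar(states, trans, final, s, t):
    block = {q: (q in final) for q in states}      # initial split: final vs not
    while True:
        def sig(q):                                # signature under current blocks
            return (q in final,
                    tuple(sorted({(a, block[r]) for (p, a, r) in trans if p == q})))
        ids, newblock = {}, {}
        for q in states:
            key = sig(q)
            ids.setdefault(key, len(ids))
            newblock[q] = ids[key]
        if newblock == block:                      # partition is stable
            return block[s] == block[t]
        block = newblock

# a.b.0 versus a.(b.0 + b.0): strongly bisimilar.
states = ['s0', 's1', 's2', 't0', 't1', 't2', 't3']
trans = [('s0', 'a', 's1'), ('s1', 'b', 's2'),
         ('t0', 'a', 't1'), ('t0', 'a', 't2'),
         ('t1', 'b', 't3'), ('t2', 'b', 't3')]
print(bisimilar(states, trans, set(), 's0', 't0'))  # True
```

The signature of a state records both its termination status (condition 2 of Definition 2) and, per label, the blocks it can reach (condition 1), so two states end up in the same block exactly when no finite distinguishing experiment exists.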
Let T = (S, −→, ↑, ↓) be a transition system. A branching bisimulation is a symmetric relation R ⊆ S × S such that for all states s, t ∈ S, s R t implies
1. if s −a→ s′, then there exist t′, t′′ ∈ S such that t −→* t′′ −(a)→ t′, s R t′′ and s′ R t′; and
2. if s↓, then there exists t′ such that t −→* t′ and t′↓.
The states s and t are branching bisimilar (notation: s ↔b t) if there exists a branching bisimulation R such that s R t.
A branching bisimulation R is divergence-preserving if, for all states s and t, s R t implies
3. if there exists an infinite sequence (s_i)_{i∈ℕ} such that s = s₀, s_i −τ→ s_{i+1} and s_i R t for all i ∈ ℕ, then there exists a state t′ such that t −→⁺ t′ and s_i R t′ for some i ∈ ℕ.
The states s and t are divergence-preserving branching bisimilar (notation: s ↔Δb t) if there exists a divergence-preserving branching bisimulation R such that s R t.

We call the largest divergence-preserving branching bisimulation relation divergence-preserving branching bisimilarity. Note that divergence-preserving branching bisimulation relations are equivalence relations [9].
Definition 4.
An equivalence relation R on a process calculus C is called a congruence if s_i R t_i for i = 1, ..., ar(f) implies f(s₁, ..., s_{ar(f)}) R f(t₁, ..., t_{ar(f)}), where f is an operator of C, ar(f) is the arity of f, and the s_i, t_i are processes defined in C.

Divergence-preserving branching bisimilarity is not a congruence with respect to most process calculi. A rootedness condition needs to be introduced.
Definition 5.
A divergence-preserving branching bisimulation relation R on a transition system (S, −→, ↑, ↓) satisfies the rootedness condition on a pair of states s₁, s₂ ∈ S if s₁ R s₂ and
1. if s₁ −a→ s₁′, then s₂ −a→ s₂′ for some s₂′ such that s₁′ R s₂′;
2. if s₁↓, then s₂↓.
The states s₁ and s₂ are rooted divergence-preserving branching bisimilar (notation: s₁ ↔Δrb s₂) if there exists a divergence-preserving branching bisimulation R such that s₁ R s₂ and R satisfies the rootedness condition on s₁ and s₂.

We can extend the above relations (↔, ↔b, ↔Δb, and ↔Δrb) to relations over two transition systems by taking the union of two disjoint transition systems; two transition systems are bisimilar if their initial states are bisimilar in the union. Namely, for two transition systems T₁ = (S₁, −→₁, ↑₁, ↓₁) and T₂ = (S₂, −→₂, ↑₂, ↓₂), we pair every state s ∈ S₁ with 1 and every state s ∈ S₂ with 2. We obtain T′_i = (S′_i, −→′_i, ↑′_i, ↓′_i) for i = 1, 2, with S′_i = { (s, i) | s ∈ S_i }, −→′_i = { ((s, i), a, (t, i)) | (s, a, t) ∈ −→_i }, ↑′_i = (↑_i, i), and ↓′_i = { (s, i) | s ∈ ↓_i }. We say T₁ ≡ T₂ if there exists a behavioural equivalence ≡ on T = (S′₁ ∪ S′₂, −→′₁ ∪ −→′₂, ↑′₁, ↓′₁ ∪ ↓′₂) such that ↑′₁ ≡ ↑′₂.

3 Complications caused by transparency

Process expressions that have the option to terminate are transparent in a sequential context: if P has the option to terminate and Q −a→ Q′, then P · Q −a→ Q′ even if P can still do transitions. In this section we shall explain how transparency gives rise to two phenomena that are undesirable in certain circumstances.

(Figure 2: A transition system with unboundedly branching behaviour.)

First, transparency facilitates the specification of unboundedly branching behaviour with a guarded recursive specification over TSP.
Second, it gives rise to forgetful stacking of variables, and as a consequence it is not clear how to specify an always terminating half counter.

We first discuss process expressions with unbounded branching. It is a well-known result from formal language theory that the context-free languages are exactly the languages accepted by pushdown automata. The process-theoretic formulation of this result is that every transition system specified by a TSP specification is language equivalent to the transition system associated with a pushdown automaton and, vice versa, every transition system associated with a pushdown automaton is language equivalent to the transition system associated with some TSP specification. The correspondence fails, however, when language equivalence is replaced by (strong) bisimilarity. The best currently known result is that for every context-free process there is a pushdown process that simulates it modulo contrasimulation [3]. However, we have not succeeded in improving this result to branching bisimulation. The reason is that context-free processes may have an unbounded branching degree when unboundedly many transparent process expressions are connected by sequential composition. Consider the following process:

  X = a.X · Y + b.1,    Y = c.1 + 1.

The transition system of this process is illustrated in Figure 2. Note that X is the initial state, and every state in the second row is a terminating state. The state Y^n has n c-labelled transitions, to 1, Y, Y^2, ..., Y^(n-1), respectively.
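The growth of the branching degree can be checked with a short count. In the sketch below (our own; it encodes the observation just made rather than deriving it from the SOS rules) the state Y^n is represented simply by the number n of stacked copies of Y:

```python
# Successors of Y^n with Y = c.1 + 1 under the standard semantics:
# every copy of Y is transparent, so the i-th copy may fire its c,
# leaving the suffix Y^(n-1-i) behind.

def successors(n):
    return {n - 1 - i for i in range(n)}           # = {0, 1, ..., n-1}

# The branching degree of Y^n equals n: no finite bound covers all states.
print([len(successors(n)) for n in (1, 4, 10)])    # [1, 4, 10]
```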
Therefore, every state in this transition system has finitely many transitions leading to distinct states, whereas there is no upper bound on the number of transitions from a state. In this case, we say that the transition system has an unbounded branching degree. We can prove that the process defined by the TSP specification above is not strongly bisimilar to a pushdown process, since it is unboundedly branching, whereas a pushdown process is always boundedly branching. The correspondence does hold modulo contrasimulation [3], and it is an open problem whether it holds modulo branching bisimilarity. In Section 5, we show that with the revised sequential composition operator we can remove such unbounded branching and establish a correspondence between pushdown processes and context-free processes modulo strong bisimilarity.

Now we proceed to discuss the phenomenon of forgetfulness. A process calculus with iteration and nesting was introduced by Bergstra, Bethke and Ponse [5, 6], in which a binary nesting operator ♯ and a Kleene star operator ∗ are added. In this paper, we add these two operators to TCP. They are intuitively given by the following equations, where we use equality to represent strong bisimilarity:

  P∗ = P · P∗ + 1
  P₁ ♯ P₂ = P₁ · (P₁ ♯ P₂) · P₁ + P₂

We give the operational semantics of these two operators in Figure 3:

  P∗↓        P −a→ P′ implies P∗ −a→ P′ · P∗
  P₁ −a→ P₁′ implies P₁ ♯ P₂ −a→ P₁′ · (P₁ ♯ P₂) · P₁
  P₂ −a→ P₂′ implies P₁ ♯ P₂ −a→ P₂′
  P₂↓ implies (P₁ ♯ P₂)↓

Figure 3: The operational semantics of nesting and iteration.

(Figure 4: The transition system of a half counter.)

Bergstra et al. show how one can specify a half counter using iteration and nesting, which then allows them to conclude that the behaviour of a Turing machine can be simulated in the calculus with iteration and nesting [5, 6]. The half counter is specified as follows (n ∈ ℕ):

  CC_n = a.CC_{n+1} + b.BB_n
  BB_n = a.BB_{n-1}   (n ≥ 1)
  BB_0 = c.CC_0

The behaviour of the half counter is illustrated in Figure 4. The initial state is CC_0. From CC_0, it can make an arbitrary number of a-transitions. At some point, it stops counting with a b-labelled transition, and then makes the same number of a-labelled transitions down to the state BB_0. In state BB_0, a zero-testing transition labelled c leads back to the state CC_0.

An implementation in TCP with iteration and nesting is provided in [6] as follows:

  HCC = ((a ♯ b) · c)∗.

It is straightforward to establish that ((a ♯ b) · a^n · c) · HCC is equivalent to CC_n for all n ≥ 1, and (a^n · c) · HCC is equivalent to BB_n for all n ∈ ℕ, modulo strong bisimilarity.

  P₁↓ and P₂↓ implies (P₁ ; P₂)↓
  P₁ −a→ P₁′ implies P₁ ; P₂ −a→ P₁′ ; P₂
  P₁↓, P₁ −/→ and P₂ −a→ P₂′ implies P₁ ; P₂ −a→ P₂′

Figure 5: The revised semantics of sequential composition.

In a context with intermediate termination, one may wonder whether it is possible to generalise their result. It is, however, not clear how to specify an always terminating half counter. At least, a naive generalisation of the specification of Bergstra et al. does not do the job. The culprit is forgetfulness. We define a half counter that terminates in every state as follows:

  C_n = a.C_{n+1} + b.B_n + 1   (n ∈ ℕ)
  B_n = a.B_{n-1} + 1   (n ≥ 1)
  B_0 = c.C_0 + 1

The following implementation is no longer valid:

  HC = (((a + 1) ♯ (b + 1)) · (c + 1))∗.

Note that, due to transparency, ((a + 1)^n · (c + 1)) · HC is no longer equivalent to B_n modulo any behavioural equivalence for n > 0: B_n only has an a-labelled transition to B_{n-1}, whereas the other process has at least n + 1 different transitions, to HC, (c + 1) · HC, (a + 1) · (c + 1) · HC, ..., (a + 1)^(n-1) · (c + 1) · HC, respectively. This process may choose to "forget" the transparent process expressions that have been stacked with the sequential composition operator.
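The half-counter specification can be executed directly. The following Python sketch (our own rendering of the defining equations for CC_n and BB_n, not code from the paper) follows a trace from CC_0:

```python
# States of the half counter: ('CC', n) or ('BB', n), with
#   CC_n = a.CC_{n+1} + b.BB_n,   BB_n = a.BB_{n-1} (n >= 1),   BB_0 = c.CC_0.

def hc_steps(state):
    kind, n = state
    if kind == 'CC':
        return [('a', ('CC', n + 1)), ('b', ('BB', n))]
    if n > 0:
        return [('a', ('BB', n - 1))]
    return [('c', ('CC', 0))]

def run(trace):
    """Follow a trace from CC_0; return the state reached, or None if stuck."""
    state = ('CC', 0)
    for label in trace:
        nxt = [t for (l, t) in hc_steps(state) if l == label]
        if not nxt:
            return None
        state = nxt[0]
    return state

# Count up three times, stop, count down three times, then the zero test:
print(run(['a', 'a', 'a', 'b', 'a', 'a', 'a', 'c']))   # ('CC', 0)
```

Every BB_n state has exactly one outgoing transition; it is exactly this determinism that the forgetful implementation HC fails to preserve.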
The forgetfulness leads to the failure to implement a terminating half counter. In Section 6, we show that with the revised semantics the forgetfulness is eliminated; we are then able to implement a terminating half counter with the revised semantics and obtain a reactively Turing powerful process calculus.

4 A revised operational semantics for sequential composition

We propose a calculus TCP; with a new sequential composition operator. Its syntax is obtained by replacing the sequential composition operator · by ; in the syntax of TCP. Note that we also use the abbreviation P^n, as we did for the standard version of the sequential composition operator. The structural operational semantics of ; is defined in Figure 5. We use P −/→ to denote that there does not exist a closed term P′ such that P −a→ P′ is derivable from the operational rules. With the revised semantics, processes with intermediate termination (the option to terminate and the option to do an action) lose their transparency in a sequential context. As a consequence, the branching degree of a context-free process is bounded, and sequential compositions may have the option to terminate without being forgetful.

Let us revisit the example from Section 3, rewritten with the revised sequential composition operator:

  X = a.X ; Y + b.1,    Y = c.1 + 1.

(Figure 6: The transition system in the revised semantics.)

Its transition system is illustrated in Figure 6. Every state in the transition system now has a bounded branching degree. For instance, the transition from Y^2 to Y is dropped, because the first Y has a transition and only transitions of the first Y in the sequential composition are allowed.

Congruence is an important property for fitting a behavioural equivalence into an axiomatic framework. We show that, in the revised semantics, ↔Δrb is a congruence. Note that the congruence property can also be inferred from a recent result of Fokkink, van Glabbeek and Luttik [7].
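The effect of the side condition can be checked mechanically. The sketch below (our own encoding, analogous to the interpreter for the standard semantics) lets the second component of ';' move only when the first component has no transition at all; the stacked copies of Y then contribute a single c-transition:

```python
# Interpreter for the revised operator ';': the second component may only
# move if the first component cannot perform any transition.

def term(p):
    tag = p[0]
    if tag == 'one':
        return True
    if tag == 'prefix':
        return False
    if tag == 'choice':
        return term(p[1]) or term(p[2])
    return term(p[1]) and term(p[2])               # seq

def steps(p):
    tag = p[0]
    if tag == 'one':
        return []
    if tag == 'prefix':
        return [(p[1], p[2])]
    if tag == 'choice':
        return steps(p[1]) + steps(p[2])
    first = steps(p[1])
    out = [(a, ('seq', q, p[2])) for a, q in first]
    if term(p[1]) and not first:                   # side condition of ';'
        out += steps(p[2])
    return out

Y = ('choice', ('prefix', 'c', ('one',)), ('one',))

def Yn(n):                                         # Y ; Y ; ... ; Y, n copies
    p = Y
    for _ in range(n - 1):
        p = ('seq', Y, p)
    return p

# Only the first copy of Y may fire: the branching degree is 1 for every n.
print([len(steps(Yn(n))) for n in (1, 3, 6)])      # [1, 1, 1]
```

Compare this with the standard semantics, under which the same stack Y^n has n outgoing c-transitions.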
Theorem 1. ↔Δrb is a congruence with respect to TCP;.

Proof. We use the following facts:
1. rooted divergence-preserving branching bisimilarity is a rooted divergence-preserving branching bisimulation relation; and
2. rooted divergence-preserving branching bisimilarity is a subset of divergence-preserving branching bisimilarity.

We show that ↔Δrb is compatible with each of the operators a._, +, ; and ∥.

1. Suppose that P ↔Δrb Q; we show that a.P ↔Δrb a.Q. To this end, we verify that R = { (a.P, a.Q) | P ↔Δrb Q } ∪ ↔Δrb is a rooted divergence-preserving branching bisimulation relation. To prove that the pair (a.P, a.Q), with P rooted divergence-preserving branching bisimilar to Q, satisfies condition 1 of Definition 5, suppose that a.P −b→ P′. Then, according to the operational semantics, b = a and P′ = P. By the operational semantics, we also have a.Q −a→ Q and, by assumption, P and Q are divergence-preserving branching bisimilar. The termination condition is trivially satisfied, since neither process terminates. The divergence-preservation condition is also satisfied, since only an a-labelled transition is possible from either process.

2. Suppose that P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂; we show that P₁ + P₂ ↔Δrb Q₁ + Q₂. To this end, we verify that R = { (P₁ + P₂, Q₁ + Q₂) | P₁ ↔Δrb Q₁, P₂ ↔Δrb Q₂ } ∪ ↔Δrb is a rooted divergence-preserving branching bisimulation relation. Suppose that P₁ + P₂ −a→ P′; then P₁ −a→ P′ or P₂ −a→ P′. We only consider the first case. Since P₁ ↔Δrb Q₁, we have Q₁ −a→ Q′ with P′ ↔Δb Q′. Then Q₁ + Q₂ −a→ Q′ with P′ ↔Δb Q′. The same argument applies to the symmetric case. If (P₁ + P₂)↓, then P₁↓ or P₂↓. Without loss of generality, suppose that P₁↓. Since P₁ ↔Δrb Q₁, we have Q₁↓.
Therefore, (Q₁ + Q₂)↓. Moreover, we verify that the divergence-preservation condition is satisfied. Hence, R is a rooted divergence-preserving branching bisimulation relation.

3. Suppose that P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂; we show that [P₁ ∥ P₂]C′ ↔Δrb [Q₁ ∥ Q₂]C′. To this end, we verify that R = { ([P₁ ∥ P₂]C′, [Q₁ ∥ Q₂]C′) | P₁ ↔Δrb Q₁, P₂ ↔Δrb Q₂ } ∪ ↔Δrb is a rooted divergence-preserving branching bisimulation relation. We first show that R′ = { ([P₁ ∥ P₂]C′, [Q₁ ∥ Q₂]C′) | P₁ ↔Δb Q₁, P₂ ↔Δb Q₂ } ∪ ↔Δb is a divergence-preserving branching bisimulation. Suppose that [P₁ ∥ P₂]C′ −a→ P′; we distinguish several cases.

(a) If P₁ −a→ P₁′ with a ∉ IC′ and P′ = [P₁′ ∥ P₂]C′, then, since P₁ ↔Δb Q₁, we have Q₁ −→* Q₁′′ −a→ Q₁′ with P₁′ ↔Δb Q₁′ and P₁ ↔Δb Q₁′′. Then [Q₁ ∥ Q₂]C′ −→* [Q₁′′ ∥ Q₂]C′ −a→ [Q₁′ ∥ Q₂]C′ with P₁ ↔Δb Q₁′′, P₁′ ↔Δb Q₁′ and P₂ ↔Δb Q₂. Thus we have ([P₁′ ∥ P₂]C′, [Q₁′ ∥ Q₂]C′) ∈ R′ and ([P₁ ∥ P₂]C′, [Q₁′′ ∥ Q₂]C′) ∈ R′.

(b) If P₁ −c?d→ P₁′, P₂ −c!d→ P₂′ and c ∈ C′, then [P₁ ∥ P₂]C′ −τ→ [P₁′ ∥ P₂′]C′. Since P₁ ↔Δb Q₁ and P₂ ↔Δb Q₂, we have Q₁ −→* Q₁′′ −c?d→ Q₁′ and Q₂ −→* Q₂′′ −c!d→ Q₂′ with P₁′ ↔Δb Q₁′, P₁ ↔Δb Q₁′′, P₂′ ↔Δb Q₂′, and P₂ ↔Δb Q₂′′. Then [Q₁ ∥ Q₂]C′ −→* [Q₁′′ ∥ Q₂′′]C′ −τ→ [Q₁′ ∥ Q₂′]C′ with P₁ ↔Δb Q₁′′, P₂ ↔Δb Q₂′′, P₁′ ↔Δb Q₁′ and P₂′ ↔Δb Q₂′. Thus we have ([P₁ ∥ P₂]C′, [Q₁′′ ∥ Q₂′′]C′) ∈ R′ and ([P₁′ ∥ P₂′]C′, [Q₁′ ∥ Q₂′]C′) ∈ R′.

If [P₁ ∥ P₂]C′↓, then P₁↓ and P₂↓. Since P₁ ↔Δb Q₁ and P₂ ↔Δb Q₂, we have Q₁ −→* Q₁′↓ and Q₂ −→* Q₂′↓ for some Q₁′ and Q₂′. Therefore, [Q₁ ∥ Q₂]C′ −→* [Q₁′ ∥ Q₂′]C′↓. Hence, R′ is a divergence-preserving branching bisimulation relation.

Now we show that R is a rooted divergence-preserving branching bisimulation.
Suppose that [P₁ ∥ P₂]C′ −a→ P′; we distinguish several cases.

(a) If P₁ −a→ P₁′ with a ∉ IC′ and P′ = [P₁′ ∥ P₂]C′, then, since P₁ ↔Δrb Q₁, we have Q₁ −a→ Q₁′ with P₁′ ↔Δb Q₁′. Then [Q₁ ∥ Q₂]C′ −a→ [Q₁′ ∥ Q₂]C′ with P₁′ ↔Δb Q₁′ and P₂ ↔Δb Q₂. Thus we have [P₁′ ∥ P₂]C′ ↔Δb [Q₁′ ∥ Q₂]C′.

(b) If P₁ −c?d→ P₁′, P₂ −c!d→ P₂′ and c ∈ C′, then [P₁ ∥ P₂]C′ −τ→ [P₁′ ∥ P₂′]C′. Since P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂, we have Q₁ −c?d→ Q₁′ and Q₂ −c!d→ Q₂′ with P₁′ ↔Δb Q₁′ and P₂′ ↔Δb Q₂′. Then [Q₁ ∥ Q₂]C′ −τ→ [Q₁′ ∥ Q₂′]C′ with P₁′ ↔Δb Q₁′ and P₂′ ↔Δb Q₂′. Thus we have [P₁′ ∥ P₂′]C′ ↔Δb [Q₁′ ∥ Q₂′]C′.

If [P₁ ∥ P₂]C′↓, then P₁↓ and P₂↓. Since P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂, we have Q₁↓ and Q₂↓. Therefore, [Q₁ ∥ Q₂]C′↓. Moreover, we verify that the divergence-preservation condition is satisfied. Hence, R is a rooted divergence-preserving branching bisimulation relation.

4. Suppose that P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂; we show that P₁ ; P₂ ↔Δrb Q₁ ; Q₂. To this end, we verify that R = { (P₁ ; P₂, Q₁ ; Q₂) | P₁ ↔Δrb Q₁, P₂ ↔Δrb Q₂ } ∪ ↔Δrb is a rooted divergence-preserving branching bisimulation relation. We first show that R′ = { (P₁ ; P₂, Q₁ ; Q₂) | P₁ ↔Δb Q₁, P₂ ↔Δrb Q₂ } ∪ ↔Δb is a divergence-preserving branching bisimulation relation. Suppose that P₁ ; P₂ −a→ P′; we distinguish several cases.

  P∗↓        P −a→ P′ implies P∗ −a→ P′ ; P∗
  P₁ −a→ P₁′ implies P₁ ♯ P₂ −a→ P₁′ ; (P₁ ♯ P₂) ; P₁
  P₂ −a→ P₂′ implies P₁ ♯ P₂ −a→ P₂′
  P₂↓ implies (P₁ ♯ P₂)↓

Figure 7: The revised semantics of iteration and nesting.

(a) If P₁ −a→ P₁′, then P′ = P₁′ ; P₂. Since P₁ ↔Δb Q₁, we have Q₁ −→* Q₁′′ −a→ Q₁′ with P₁′ ↔Δb Q₁′ and P₁ ↔Δb Q₁′′. Then Q₁ ; Q₂ −→* Q₁′′ ; Q₂ −a→ Q₁′ ; Q₂ with P₁ ↔Δb Q₁′′, P₁′ ↔Δb Q₁′, and P₂ ↔Δrb Q₂. Thus, we have (P₁′ ; P₂, Q₁′ ; Q₂) ∈ R′ and (P₁ ; P₂, Q₁′′ ; Q₂) ∈ R′.

(b) If P₁↓, P₂ −a→ P₂′ and P₁ −/→.
Since P₁ ↔Δb Q₁ and P₂ ↔Δrb Q₂, we have Q₁ −→* Q₁′↓ with Q₁′ −/→ for some Q₁′ with P₁ ↔Δb Q₁′, and Q₂ −a→ Q₂′ with P₂′ ↔Δb Q₂′. Then, we have Q₁ ; Q₂ −→* Q₁′ ; Q₂ −a→ Q₂′ with P₂′ ↔Δb Q₂′ and P₁ ↔Δb Q₁′. Thus we have (P₂′, Q₂′) ∈ R′ and (P₁ ; P₂, Q₁′ ; Q₂) ∈ R′.

If (P₁ ; P₂)↓, then P₁↓ and P₂↓. Since P₁ ↔Δb Q₁ and P₂ ↔Δrb Q₂, we have Q₁ −→* Q₁′↓ for some Q₁′, and Q₂↓. Therefore, Q₁ ; Q₂ −→* Q₁′ ; Q₂↓. Moreover, we verify that the divergence-preservation condition is satisfied. Hence, R′ is a divergence-preserving branching bisimulation relation.

Now we show that R is a rooted divergence-preserving branching bisimulation relation. Suppose that P₁ ; P₂ −a→ P′; we distinguish several cases:

(a) If P₁ −a→ P₁′, then P′ = P₁′ ; P₂. Since P₁ ↔Δrb Q₁, we have Q₁ −a→ Q₁′ with P₁′ ↔Δb Q₁′. Then Q₁ ; Q₂ −a→ Q₁′ ; Q₂ with P₁′ ↔Δb Q₁′ and P₂ ↔Δrb Q₂. Thus, we have P₁′ ; P₂ ↔Δb Q₁′ ; Q₂.

(b) If P₁↓, P₂ −a→ P₂′ and P₁ −/→, then, since P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂, we have Q₁↓, Q₂ −a→ Q₂′ with P₂′ ↔Δb Q₂′, and Q₁ −/→. Then, we have Q₁ ; Q₂ −a→ Q₂′ with P₂′ ↔Δb Q₂′.

If (P₁ ; P₂)↓, then P₁↓ and P₂↓. Since P₁ ↔Δrb Q₁ and P₂ ↔Δrb Q₂, we have Q₁↓ and Q₂↓. Therefore, (Q₁ ; Q₂)↓. Moreover, we verify that the divergence-preservation condition is satisfied. Hence, R is a rooted divergence-preserving branching bisimulation relation. □

We also define a version of TCP with iteration and nesting (TCP♯) in the revised semantics. By removing recursive specifications and adding a non-regular iterator, we obtain TCP♯ as a variation of TCP; with two additional operators: P∗ and P₁ ♯ P₂. The operational semantics is defined in Figure 7.

5 Context-free processes and pushdown processes

The relationship between context-free processes and pushdown processes has been studied extensively in the literature [3]. We consider the process calculus
Theory of Sequential Processes (TSP;) with the revised semantics of the sequential composition operator, given by the following grammar:

  P ::= 0 | 1 | a.P | P ; P | P + P | N.

We give the definition of context-free processes as follows:
Definition 6. A context-free process is the bisimulation equivalence class of the transition system generated by a finite guarded recursive specification over TSP;.

Note that every context-free process can be rewritten into Greibach normal form. In this paper, we consider only context-free processes (guarded recursive specifications) written in Greibach normal form:

  X = Σ_{i∈I_X} α_i.ξ_i (+ 1).

In this form, the right-hand side of every equation consists of a number of summands, indexed by a finite set I_X (the empty sum is 0), each of which is 1, or of the form α_i.ξ_i, where ξ_i is a sequential composition of a number of names (the empty sequence is 1).

We shall show that every context-free process is equivalent to a pushdown process modulo strong bisimilarity. The notion of pushdown automata is defined as follows:

Definition 7.
A pushdown automaton (PDA) is a -tuple ( S , Σ , D , −→ , ↑ , Z , ↓ ) , where1. S is a finite set of states ,2. Σ is a finite set of input symbols ,3. D is a finite set of stack symbols ,4. −→⊆ S × Σ × D × Σ ∗ S is a finite transition relation , (we write s a [ d /δ ] −→ t for ( s , d , a , δ, t ) ∈−→ ),5. ↑∈ S is the initial state ,6. Z ∈ D is the initial stack symbol , and7. ↓⊆ S is a set of accepting states . We use a sequence of stack symbols δ ∈ D ∗ to represent the contents of a stack. We associate withevery pushdown automaton a labelled transition system. We call the transition system associated with apushdown automaton a pushdown process. Definition 8.
Let M = (S, Σ, D, −→, ↑, Z, ↓) be a PDA. The pushdown process T(M) = (S_T, −→_T, ↑_T, ↓_T) associated with M is defined as follows:
1. its set of states is the set S_T = {(s, δ) | s ∈ S, δ ∈ D∗} of all configurations of M;
2. its transition relation −→_T ⊆ S_T × Σ × S_T is the least relation satisfying, for all a ∈ Σ, d ∈ D, and δ, δ′ ∈ D∗: (s, dδ) a−→_T (t, δ′δ) iff s a[d/δ′]−→ t;
3. its initial state is the configuration ↑_T = (↑, Z); and
4. its set of final states is the set ↓_T = {(s, δ) | s ∈ S, s ↓, δ ∈ D∗}.
Consider a context-free process in Greibach normal form defined by a set of names V = {X1, X2, ..., Xm}, where X_j = Σ_{i∈I_{Xj}} α_ij.ξ_ij (+1), and X1 is the initial state. We introduce the following functions on a sequence of names ξ:
1. length: V∗ → N; length(ξ) computes the length of ξ;
2. get: V∗ × N → V; get(ξ, i) computes the i-th name of ξ;
3. suffset: V∗ × N → 2^V; suffset(ξ, i) = {get(ξ, j) | j = i + 1, ..., length(ξ)} computes the set that contains all the names in the suffix of ξ starting right after the i-th name (in particular, suffset(ξ, 0) is the set of all names occurring in ξ).
Next we define a PDA M = (S, Σ, D, −→, ↑, Z, ↓) to simulate the transition system associated with X1 as follows:
1. S = {s_D | D ⊆ V};
2. Σ = A_τ;
3. D = V ∪ {X† | X ∈ V};
4. the set of transitions −→ is defined as follows:

−→ = {(s_D, X†_j, α_ij, δ(s_D, X†_j, ξ_ij), s_{D(s_D, X†_j, ξ_ij)}) | i ∈ I_{Xj}, j = 1, ..., m, D ⊆ V}
∪ {(s_D, X_j, α_ij, δ(s_D, X_j, ξ_ij), s_{D(s_D, X_j, ξ_ij)}) | i ∈ I_{Xj}, j = 1, ..., m, D ⊆ V},

where δ(s_D, X†_j, ξ_ij) is a string of length length(ξ_ij) defined as follows: for k = 1, ..., length(ξ_ij), we let X_k = get(ξ_ij, k), and
(a) if X_k ∉ (D \ {X_j}) ∪ suffset(ξ_ij, k), then the k-th symbol of δ(s_D, X†_j, ξ_ij) is X†_k,
(b) otherwise, the k-th symbol of δ(s_D, X†_j, ξ_ij) is X_k;
δ(s_D, X_j, ξ_ij) is a string of length length(ξ_ij) defined as follows: for k = 1, ..., length(ξ_ij), we let X_k = get(ξ_ij, k), and
(a) if X_k ∉ D ∪ suffset(ξ_ij, k), then the k-th symbol of δ(s_D, X_j, ξ_ij) is X†_k,
(b) otherwise, the k-th symbol of δ(s_D, X_j, ξ_ij) is X_k;
and we also define D(s_D, X†_j, ξ_ij) = (D \ {X_j}) ∪ suffset(ξ_ij, 0) and D(s_D, X_j, ξ_ij) = D ∪ suffset(ξ_ij, 0);
5. ↑ = s_{{X1}};
6. Z = X†1;
7. ↓ = {s_D | X ↓ for all X ∈ D}.
We observe that every process expression ξ is simulated by a configuration of M in which the sequence of names in ξ is stored on the stack. The first appearance of every name from the bottom of the stack is marked with †, and the state is indexed by the set of all names occurring in ξ. A state is terminating if and only if all the names in the set that indexes it are terminating. We show the following result:
Lemma 1. T(X1) ↔ T(M).
Proof.
We first define an auxiliary function stack: V∗ → D∗ as follows: given ξ ∈ V∗, for k = 1, ..., length(ξ), we let X_k = get(ξ, k), and
1. if X_k ∉ suffset(ξ, k), then the k-th element of stack(ξ) is X†_k;
2. otherwise, the k-th element of stack(ξ) is X_k.
Note that stack(Xξ) and stack(ξ) share the same suffix of length length(ξ).
We show that the following relation is a strong bisimulation:

R = {(ξ, (s_{suffset(ξ,0)}, stack(ξ))) | ξ ∈ V∗}.

We rewrite ξ as X_j ξ′; then it has the following transitions:

X_j ξ′ α_ij−→ ξ_ij ξ′, i ∈ I_{Xj}.

We need to show that they are simulated by the transitions

(s_{suffset(ξ,0)}, stack(ξ)) α_ij−→ (s_{suffset(ξ_ij ξ′,0)}, stack(ξ_ij ξ′)), i ∈ I_{Xj},

for then we have (ξ_ij ξ′, (s_{suffset(ξ_ij ξ′,0)}, stack(ξ_ij ξ′))) ∈ R.
We consider the configuration (s_{suffset(ξ,0)}, stack(ξ)) and distinguish two cases for the top symbol of the stack.
1. If get(stack(ξ), 1) = X†_j, then M has the transition
(s_{suffset(ξ,0)}, X†_j, α_ij, δ(s_{suffset(ξ,0)}, X†_j, ξ_ij), s_{D(s_{suffset(ξ,0)}, X†_j, ξ_ij)}).
The new stack is S = δ(s_{suffset(ξ,0)}, X†_j, ξ_ij) stack(ξ′). We verify that S = stack(ξ_ij ξ′). Note that they share the same suffix stack(ξ′), so we only need to verify the first length(ξ_ij) elements. For the l-th element, we let X_l = get(ξ_ij, l), and we distinguish two cases.
(a) If X_l ∉ (suffset(ξ, 0) \ {X_j}) ∪ suffset(ξ_ij, l), then the l-th element of S is X†_l. Since get(stack(ξ), 1) = X†_j, from the definition of stack we have X_j ∉ suffset(ξ, 1) = suffset(ξ′, 0), so suffset(ξ, 0) \ {X_j} = suffset(ξ′, 0) and X_l ∉ suffset(ξ′, 0) ∪ suffset(ξ_ij, l). Hence X_l ∉ suffset(ξ_ij ξ′, l), and therefore the l-th element of stack(ξ_ij ξ′) is also X†_l.
(b) Otherwise, the l-th element of S is X_l. By the definition of stack, the l-th element of stack(ξ_ij ξ′) is also X_l.
Moreover, we verify that the new state satisfies s_{D(s_{suffset(ξ,0)}, X†_j, ξ_ij)} = s_{suffset(ξ_ij ξ′,0)}. Note that
D(s_{suffset(ξ,0)}, X†_j, ξ_ij) = (suffset(ξ, 0) \ {X_j}) ∪ suffset(ξ_ij, 0) = suffset(ξ′, 0) ∪ suffset(ξ_ij, 0) = suffset(ξ_ij ξ′, 0).
Hence, we have (s_{suffset(ξ,0)}, stack(ξ)) α_ij−→ (s_{suffset(ξ_ij ξ′,0)}, stack(ξ_ij ξ′)).
2. If get(stack(ξ), 1) = X_j, then M has the transition
(s_{suffset(ξ,0)}, X_j, α_ij, δ(s_{suffset(ξ,0)}, X_j, ξ_ij), s_{D(s_{suffset(ξ,0)}, X_j, ξ_ij)}).
The new stack is S = δ(s_{suffset(ξ,0)}, X_j, ξ_ij) stack(ξ′). We verify that S = stack(ξ_ij ξ′). Note that they share the same suffix stack(ξ′), so we only need to verify the first length(ξ_ij) elements. For the l-th element, we let X_l = get(ξ_ij, l), and we distinguish two cases.
(a) If X_l ∉ suffset(ξ, 0) ∪ suffset(ξ_ij, l), then the l-th element of S is X†_l. Since get(stack(ξ), 1) = X_j, from the definition of stack we have X_j ∈ suffset(ξ, 1) = suffset(ξ′, 0), so suffset(ξ, 0) = suffset(ξ′, 0) and X_l ∉ suffset(ξ′, 0) ∪ suffset(ξ_ij, l). Hence X_l ∉ suffset(ξ_ij ξ′, l), and therefore the l-th element of stack(ξ_ij ξ′) is also X†_l.
(b) Otherwise, the l-th element of S is X_l. By the definition of stack, the l-th element of stack(ξ_ij ξ′) is also X_l.
Moreover, we verify that the new state satisfies s_{D(s_{suffset(ξ,0)}, X_j, ξ_ij)} = s_{suffset(ξ_ij ξ′,0)}. Note that
D(s_{suffset(ξ,0)}, X_j, ξ_ij) = suffset(ξ, 0) ∪ suffset(ξ_ij, 0) = suffset(ξ′, 0) ∪ suffset(ξ_ij, 0) = suffset(ξ_ij ξ′, 0).
Hence, we have (s_{suffset(ξ,0)}, stack(ξ)) α_ij−→ (s_{suffset(ξ_ij ξ′,0)}, stack(ξ_ij ξ′)).
Concluding the two cases, the above transitions are matched. Using a similar analysis, all the transitions from (s_{suffset(ξ,0)}, stack(ξ)) are simulated by X_j ξ′ as well.
Now we consider the termination condition: ξ ↓ iff X ↓ for all X ∈ suffset(ξ, 0), and (s_{suffset(ξ,0)}, stack(ξ)) ↓ iff X ↓ for all X ∈ suffset(ξ, 0). Therefore, the termination condition is also verified.
Hence, we have T(X1) ↔ T(M). ∎
We have the following theorem.
Theorem 2.
For every context-free process P, there exists a PDA M such that T(P) ↔ T(M).
In this section, we shall discuss the theory of executability in the context of termination. We shall prove that TCP♯ is reactively Turing powerful in the context of termination.
The notion of reactive Turing machines (RTMs) [4] was introduced as an extension of Turing machines to define which behaviour is executable by a computing system in terms of labelled transition systems. The definition of RTMs is parameterised with the set A_τ, which we assume to be a finite set. Furthermore, the definition is parameterised with another finite set D of data symbols. We extend D with a special symbol □ ∉ D to denote a blank tape cell, and denote the set D ∪ {□} of tape symbols by D□.
Definition 9 (Reactive Turing Machine). A reactive Turing machine (RTM) is a quadruple (S, −→, ↑, ↓), where
1. S is a finite set of states,
2. −→ ⊆ S × D□ × A_τ × D□ × {L, R} × S is a finite collection of (D□ × A_τ × D□ × {L, R})-labelled transitions (we write s a[d/e]M−→ t for (s, d, a, e, M, t) ∈ −→),
3. ↑ ∈ S is a distinguished initial state, and
4. ↓ ⊆ S is a finite set of final states.
Intuitively, the meaning of a transition s a[d/e]M−→ t is that whenever the RTM is in state s and d is the symbol currently read by the tape head, then it may execute the action a, write symbol e on the tape (replacing d), move the read/write head one position to the left or the right on the tape (depending on whether M = L or M = R), and then end up in state t.
To formalise this intuitive understanding of the operational behaviour of RTMs, we associate with every RTM M an A_τ-labelled transition system T(M). The states of T(M) are the configurations of M, which consist of a state from S, the tape contents, and the position of the read/write head. We denote by Ď□ = {ď | d ∈ D□} the set of marked symbols; a tape instance is a sequence δ ∈ (D□ ∪ Ď□)∗ such that δ contains exactly one element of the set of marked symbols Ď□, indicating the position of the read/write head. We adopt a convention to concisely denote an update of the placement of the tape head marker. Let δ be an element of D□∗. Then by δ< we denote the element of (D□ ∪ Ď□)∗ obtained by placing the tape head marker on the right-most symbol of δ (if it exists), and □̌ otherwise. Similarly, >δ is obtained by placing the tape head marker on the left-most symbol of δ (if it exists), and □̌ otherwise.
Definition 10. Let M = (S, −→, ↑, ↓) be an RTM. The transition system T(M) associated with M is defined as follows:
1. its set of states is the set C_M = {(s, δ) | s ∈ S, δ a tape instance} of all configurations of M;
2. its transition relation −→ ⊆ C_M × A_τ × C_M is the least relation satisfying, for all a ∈ A_τ, d, e ∈ D□ and δ_L, δ_R ∈ D□∗:
• (s, δ_L ď δ_R) a−→ (t, δ_L< e δ_R) iff s a[d/e]L−→ t,
• (s, δ_L ď δ_R) a−→ (t, δ_L e >δ_R) iff s a[d/e]R−→ t;
3. its initial state is the configuration (↑, □̌); and
4. its set of final states is the set {(s, δ) | δ a tape instance, s ↓}.
Turing introduced his machines to define the notion of effectively computable function in [15]. By analogy, the notion of RTM can be used to define a notion of effectively executable behaviour.
Definition 11 (Executability). A transition system is executable if it is the transition system associated with some RTM.
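As an informal illustration of Definition 10 (this sketch and its names, e.g. `rtm_step`, are ours and not part of the paper's formal development), the configurations and the head-marker conventions δ< and >δ can be mirrored by a small program that computes the successor configurations of an RTM:

```python
BLANK = "□"

def rtm_step(rules, state, left, head, right):
    """Successor configurations of an RTM configuration.

    `rules` maps (state, read-symbol) to a list of moves
    (action, write-symbol, direction, target-state).  A configuration is
    split into the tape left of the head (leftmost symbol first), the
    symbol under the head, and the tape right of the head.
    """
    succ = []
    for (a, e, move, t) in rules.get((state, head), []):
        if move == "L":
            # the delta< convention: the head goes to the rightmost symbol
            # of the left part, or to a fresh blank if that part is empty
            new_head = left[-1] if left else BLANK
            succ.append((a, (t, left[:-1], new_head, (e,) + right)))
        else:  # move == "R": the >delta convention, mirrored
            new_head = right[0] if right else BLANK
            succ.append((a, (t, left + (e,), new_head, right[1:])))
    return succ
```

For instance, moving left with an empty left part places the head on a fresh blank, exactly as δ< yields □̌ for the empty sequence.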
In the theory of executability, we use the notion of executable transition systems to evaluate the absolute expressiveness of process calculi in two respects. On the one hand, if every transition system associated with a process expression specified in a process calculus is executable modulo some behavioural equivalence, then we say that the process calculus is executable modulo that behavioural equivalence. On the other hand, if every executable transition system is behaviourally equivalent to the transition system associated with some process expression of the calculus modulo some behavioural equivalence, then we say that the process calculus is reactively Turing powerful modulo that behavioural equivalence.
We only briefly explain that both TCP; and TCP♯ are executable: their transition systems are effective and do not exhibit unbounded branching, so we can apply the result from [4] and conclude that they are executable modulo ↔∆b.
We now concentrate on showing that TCP♯ is reactively Turing powerful modulo ↔∆b. We first introduce the notion of bisimulation up to ↔b, which is a useful tool for establishing the proofs in this section. Note that we adopt a non-symmetric bisimulation up to relation.
Definition 12. Let T = (S, −→, ↑, ↓) be a transition system. A relation R ⊆ S × S is a bisimulation up to ↔b if, whenever s1 R s2, then for all a ∈ A_τ:
1. if s1 −→∗ s1′′ a−→ s1′ with s1 ↔b s1′′ and (a ≠ τ ∨ s1′′ ̸↔b s1′), then there exists s2′ such that s2 a−→ s2′, s1′′ ↔b∘R s2 and s1′ ↔b∘R s2′;
2. if s2 a−→ s2′, then there exist s1′′ and s1′ such that s1 −→∗ s1′′ a−→ s1′, s1′′ ↔b s2 and s1′ ↔b∘R s2′;
3. if s1 ↓, then there exists s2′ such that s2 −→∗ s2′ and s2′ ↓; and
4. if s2 ↓, then there exists s1′ such that s1 −→∗ s1′ and s1′ ↓.
Lemma 2. If R is a bisimulation up to ↔b, then R ⊆ ↔b.
Proof. It suffices to prove that ↔b∘R is a branching bisimulation, for ↔b is an equivalence relation. Let s1, s2, s3 ∈ S and s1 ↔b s2 R s3.
1. Suppose s1 a−→ s1′. We distinguish two cases:
(a) If a = τ and s1 ↔b s1′, then s1′ ↔b s2 R s3, so s1′ ↔b∘R s3. This satisfies Condition 1 of the definition of branching bisimulation.
(b) Otherwise, we have a ≠ τ ∨ s1 ̸↔b s1′. Then, since s1 ↔b s2, according to Definition 3 there exist s2′′ and s2′ such that s2 −→∗ s2′′ a−→ s2′, s1 ↔b s2′′ and s1′ ↔b s2′. Note that s2 ↔b s1 ↔b s2′′, which is what is needed to apply Condition 1 of Definition 12. Hence there exists s3′ such that s3 a−→ s3′, s2′′ ↔b∘R s3 and s2′ ↔b∘R s3′. Since s1′ ↔b s2′ and s2′ ↔b∘R s3′, it follows that s1′ ↔b∘R s3′. This satisfies Condition 1 of the definition of branching bisimulation.
2. If s3 a−→ s3′, then according to Definition 12 there exist s2′′ and s2′ such that s2 −→∗ s2′′ a−→ s2′, s2′′ ↔b s3 and s2′ ↔b∘R s3′. Since s1 ↔b s2 and s2 −→∗ s2′′ a−→ s2′, by Definition 3 there exist s1′′ and s1′ such that s1 −→∗ s1′′ (a)−→ s1′ with s1′′ ↔b s2′′ and s1′ ↔b s2′. Since s2′′ ↔b s3 and s2′ ↔b∘R s3′, it follows that s1′′ ↔b s3 and s1′ ↔b∘R s3′. This satisfies the symmetric counterpart of Condition 1 of the definition of branching bisimulation.
The termination conditions are also satisfied by Definition 12. Therefore, a bisimulation up to ↔b is included in ↔b. ∎
Next we show that TCP♯ is reactively Turing powerful by giving a specification of a reactive Turing machine in TCP♯ modulo ↔∆b. The proof consists of five steps.
We first write the specification of a terminating half counter; then we show that every regular process can be specified in TCP♯; next we use two half counters and a regular process to encode a terminating stack; with two stacks and a regular process we can specify a tape; and finally we use a tape and a regular control process to specify an RTM.
We first recall the following infinite specification in TSP; of a terminating half counter:

C_n = a.C_{n+1} + b.B_n + 1 (n ∈ N)
B_n = a.B_{n−1} + 1 (n ≥ 1)
B_0 = c.C_0 + 1.

We provide a specification of a half counter in TCP♯ as follows:

HC = ((a + 1) ♯ (b + 1); (c + 1))∗.

We have the following lemma:
Lemma 3. C_0 ↔∆b HC.
Proof. We verify that HC ↔∆b C_0. Consider the following relation:

R1 = {(C_0, HC)} ∪ {(C_n, (a + 1) ♯ (b + 1); (a + 1)^n; (c + 1); HC) | n ≥ 1} ∪ {(B_n, (a + 1)^n; (c + 1); HC) | n ∈ N}.

We let R2 be the symmetric counterpart of R1. We show that R = R1 ∪ R2 is a divergence-preserving branching bisimulation as follows.
Note that R satisfies the divergence-preservation condition, since there is no infinite sequence of τ-transitions. In this proof, we only treat the pairs in R1, since we can use the symmetric argument for the pairs in R2.
We first consider the pair (C_0, HC). Note that C_0 has the following transitions:

C_0 a−→ C_1, and C_0 b−→ B_0,

which are simulated by

HC a−→ (a + 1) ♯ (b + 1); (a + 1); (c + 1); HC, and HC b−→ (c + 1); HC,

with (C_1, (a + 1) ♯ (b + 1); (a + 1); (c + 1); HC) ∈ R1 and (B_0, (c + 1); HC) ∈ R1. Moreover, we have C_0 ↓ and HC ↓.
Now we consider the pair (C_n, (a + 1) ♯ (b + 1); (a + 1)^n; (c + 1); HC), with n ≥ 1. Note that C_n has the following transitions:

C_n a−→ C_{n+1}, and C_n b−→ B_n,

which are simulated by

(a + 1) ♯ (b + 1); (a + 1)^n; (c + 1); HC a−→ (a + 1) ♯ (b + 1); (a + 1)^{n+1}; (c + 1); HC, and
(a + 1) ♯ (b + 1); (a + 1)^n; (c + 1); HC b−→ (a + 1)^n; (c + 1); HC,

with (C_{n+1}, (a + 1) ♯ (b + 1); (a + 1)^{n+1}; (c + 1); HC) ∈ R1 and (B_n, (a + 1)^n; (c + 1); HC) ∈ R1. Moreover, we have C_n ↓ and (a + 1) ♯ (b + 1); (a + 1)^n; (c + 1); HC ↓.
Now we proceed to consider the pair (B_0, (c + 1); HC). Note that B_0 has the following transition:

B_0 c−→ C_0,

which is simulated by (c + 1); HC c−→ HC, with (C_0, HC) ∈ R1. Moreover, we have B_0 ↓ and (c + 1); HC ↓.
Next we consider the pair (B_n, (a + 1)^n; (c + 1); HC), with n ≥ 1. Note that B_n has the following transition:

B_n a−→ B_{n−1},

which is simulated by

(a + 1)^n; (c + 1); HC a−→ (a + 1)^{n−1}; (c + 1); HC,

with (B_{n−1}, (a + 1)^{n−1}; (c + 1); HC) ∈ R1. Moreover, we have B_n ↓ and (a + 1)^n; (c + 1); HC ↓.
Hence, we have C_0 ↔∆b HC. ∎
Next we show that every regular process can be specified in TCP♯ modulo ↔∆b. A regular process over a finite set of action labels A_τ is given by P_i = Σ_{j=1}^n α_ij; P_j + β_i (i = 1, ..., n), where the α_ij and β_i are finite sums of actions from A_τ. We show the following lemma.
Lemma 4.
Every regular process can be specified in TCP♯ modulo ↔∆b.
Proof. We consider a regular process over a finite set of action labels A_τ given by P_i = Σ_{j=1}^n α_ij; P_j + β_i (i = 1, ..., n), where the α_ij and β_i are finite sums of actions from A_τ. We let c!0, c!1, ..., c!(n+1), c?0, c?1, ..., c?(n+1) be labels that are not in A_τ.
Consider the following processes:

G_i = Σ_{j=1}^n α_ij; (c!j + 1) + β_i; (c!0 + 1)
M = Σ_{j=1}^n (c?j + 1); G_j + (c!(n+1) + 1); (c?(n+1) + 1) ♯ (c?0 + 1)
N = Σ_{j=1}^{n+1} (c?j + 1); (c!j + 1) ♯ ((c?0 + 1); (c!0 + 1)).

Note that ; is associative, and we suppose that ; binds stronger than +. We verify that P_i ↔∆b [G_i; M ∥ N]_{c}. We let Q = Σ_{j=1}^n (c?j + 1); G_j + (c!(n+1) + 1); (c?(n+1) + 1) and O = Σ_{j=1}^{n+1} (c?j + 1); (c!j + 1).
We let

R1 = {(P_i, [G_i; M; Q^k ∥ N; O^k]_{c}) | k ∈ N, i = 1, ..., n}
∪ {(P_i, [(c!i + 1); M; Q^k ∥ N; O^k]_{c}) | k ∈ N, i = 1, ..., n}
∪ {(P_i, [M; Q^k ∥ (c!i + 1); N; O^{k+1}]_{c}) | k ∈ N, i = 1, ..., n}
∪ {(1, [(c!0 + 1); M; Q^k ∥ N; O^k]_{c}) | k ∈ N}
∪ {(1, [M; Q^k ∥ (c!0 + 1); O^k]_{c}) | k ∈ N}
∪ {(1, [Q^k ∥ O^k]_{c}) | k ∈ N}
∪ {(1, [(c?(n+1) + 1); Q^k ∥ (c!(n+1) + 1); O^k]_{c}) | k ∈ N},

and we let R2 be the symmetric counterpart of R1. We show that R = R1 ∪ R2 is a divergence-preserving branching bisimulation. We shall only verify the pairs in R1 in this proof, since R2 is symmetric.
For the set of pairs {(P_i, [G_i; M; Q^k ∥ N; O^k]_{c}) | k ∈ N, i = 1, ..., n}, note that P_i has the following transitions: P_i a−→ P_j if a is a summand of α_ij, or P_i a−→ 1 if a is a summand of β_i.
The first transition is simulated by the following transitions:

[G_i; M; Q^k ∥ N; O^k]_{c} a−→ [(c!j + 1); M; Q^k ∥ N; O^k]_{c} τ−→ [M; Q^k ∥ (c!j + 1); N; O^{k+1}]_{c} τ−→ [G_j; M; Q^{k+1} ∥ N; O^{k+1}]_{c}.

If k ≥ 1, then the second transition is simulated by the following transitions:

[G_i; M; Q^k ∥ N; O^k]_{c} a−→ [(c!0 + 1); M; Q^k ∥ N; O^k]_{c} τ−→ [M; Q^k ∥ (c!0 + 1); O^k]_{c} τ−→ [Q^k ∥ O^k]_{c} τ−→ [(c?(n+1) + 1); Q^{k−1} ∥ (c!(n+1) + 1); O^{k−1}]_{c} τ−→ [Q^{k−1} ∥ O^{k−1}]_{c} −→∗ 1;

otherwise, if k = 0, then the second transition is simulated by:

[G_i; M ∥ N]_{c} a−→ [(c!0 + 1); M ∥ N]_{c} τ−→ [M ∥ (c!0 + 1)]_{c} τ−→ 1.

We have that (P_j, [(c!j + 1); M; Q^k ∥ N; O^k]_{c}) ∈ R1, (P_j, [M; Q^k ∥ (c!j + 1); N; O^{k+1}]_{c}) ∈ R1, (P_j, [G_j; M; Q^{k+1} ∥ N; O^{k+1}]_{c}) ∈ R1, (1, [(c!0 + 1); M; Q^k ∥ N; O^k]_{c}) ∈ R1, (1, [M; Q^k ∥ (c!0 + 1); O^k]_{c}) ∈ R1, (1, [Q^k ∥ O^k]_{c}) ∈ R1, (1, [(c?(n+1) + 1); Q^k ∥ (c!(n+1) + 1); O^k]_{c}) ∈ R1 and (1, 1) ∈ R for all k ∈ N and i, j = 1, ..., n.
One can easily verify that all the other pairs satisfy the conditions of branching bisimulation. The relation R also satisfies the divergence-preservation condition, since no infinite τ-transition sequence is possible from any process occurring in R.
Therefore, we obtain a finite specification of every regular process in TCP♯ modulo ↔∆b. ∎
Now we show that a stack can be specified by a regular process and two half counters. We first give an infinite specification in TSP; of a stack:

S_ε = Σ_{d∈D□} push?d.S_d + pop!□.S_ε + 1
S_{dδ} = pop!d.S_δ + Σ_{e∈D□} push?e.S_{edδ} + 1.

Note that D□ is a finite set of symbols; we suppose that D□ contains N symbols (including □), and we use ε to denote the empty sequence. We first define an encoding ⌈·⌉: D□∗ → N from sequences of symbols to natural numbers, inductively, as follows:

⌈ε⌉ = 0
⌈d_k⌉ = k (k = 1, 2, ..., N)
⌈d_k σ⌉ = k + N × ⌈σ⌉.

We define a stack in TCP♯ as follows:

S = [X_∅ ∥ P_1 ∥ P_2]_{a_1, a_2, b_1, b_2, c_1, c_2}
P_j = ((a_j!a + 1) ♯ (b_j!b + 1); (c_j!c + 1))∗ (j = 1, 2)
X_∅ = (Σ_{j=1}^N ((push?d_j + 1); (a_1?a + 1)^j; (b_1?b + 1); X_j) + pop!□)∗
X_k = Σ_{j=1}^N ((push?d_j + 1); Push_j) + (pop!d_k + 1); Pop_k (k = 1, 2, ..., N)
Push_k = Shift1to2; (a_1?a + 1)^k; NShift2to1; X_k (k = 1, 2, ..., N)
Pop_k = (a_1?a + 1)^k; /NShift1to2; Test_∅
Shift1to2 = ((a_1?a + 1); (a_2?a + 1))∗; (c_1?c + 1); (b_2?b + 1)
NShift2to1 = ((a_2?a + 1); (a_1?a + 1)^N)∗; (c_2?c + 1); (b_1?b + 1)
/NShift1to2 = ((a_1?a + 1)^N; (a_2?a + 1))∗; (c_1?c + 1); (b_2?b + 1)
Test_∅ = (a_2?a + 1); (a_1?a + 1); Test_1 + (c_2?c + 1); X_∅
Test_1 = (a_2?a + 1); (a_1?a + 1); Test_2 + (c_2?c + 1); X_1
Test_2 = (a_2?a + 1); (a_1?a + 1); Test_3 + (c_2?c + 1); X_2
···
Test_N = (a_2?a + 1); (a_1?a + 1); Test_1 + (c_2?c + 1); X_N.

We have the following result.
Lemma 5. S_ε ↔∆b S.
Proof. We define some auxiliary processes:

P_j(0) = ((a_j!a + 1) ♯ (b_j!b + 1); (c_j!c + 1))∗ (j = 1, 2)
P_j(n) = (a_j!a + 1) ♯ (b_j!b + 1); (a_j!a + 1)^n; (c_j!c + 1); P_j(0) (j = 1, 2; n = 1, 2, ...)
Q_j(n) = (a_j!a + 1)^n; (c_j!c + 1); P_j(0) (j = 1, 2; n ∈ N).

P_1(0) and P_2(0) behave as two half counters.
We let R1 = {(S_ε, S)} ∪ {(S_{d_j δ}, [X_j; X_∅ ∥ Q_1(m) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}) | j = ⌈d_j⌉, m = ⌈d_j δ⌉, d_j ∈ D□, δ ∈ D□∗}. We let R2 be the symmetric counterpart of R1. We verify that R = R1 ∪ R2 ∪ ↔∆b is a divergence-preserving branching bisimulation relation.
Note that S_ε has the following transitions: S_ε push?d_j−→ S_{d_j} for all j = 1, 2, ..., N, and S_ε pop!□−→ S_ε. They are simulated by the following transitions:

S push?d_j−→ [(a_1?a + 1)^j; (b_1?b + 1); X_j; X_∅ ∥ P_1(0) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [(b_1?b + 1); X_j; X_∅ ∥ P_1(j) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [X_j; X_∅ ∥ Q_1(j) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}

for all j = 1, 2, ..., N, and S pop!□−→ S, with (S_{d_j}, [X_j; X_∅ ∥ Q_1(j) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}) ∈ R1. We denote the sequence of transitions [(a_1?a + 1)^j; (b_1?b + 1); X_j; X_∅ ∥ P_1(0) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [X_j; X_∅ ∥ Q_1(j) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} by s_0 −→∗ s_m. It is obvious that s_0 ↔∆b · · · ↔∆b s_m. Therefore, S push?d_j−→ s_0, and s_0 ↔∆b s_m with (S_{d_j}, s_m) ∈ R1.
Note that S_{d_j δ} has the following transitions: S_{d_j δ} push?d_k−→ S_{d_k d_j δ} for all k = 1, 2, ..., N, and S_{d_j δ} pop!d_j−→ S_δ, where δ = d_k δ′. They are simulated by the following transitions:

[X_j; X_∅ ∥ Q_1(⌈d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} push?d_k−→ [Push_k; X_∅ ∥ Q_1(⌈d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [(a_1?a + 1)^k; NShift2to1; X_k; X_∅ ∥ P_1(0) ∥ Q_2(⌈d_j δ⌉)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [NShift2to1; X_k; X_∅ ∥ P_1(⌈d_k⌉) ∥ Q_2(⌈d_j δ⌉)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [X_k; X_∅ ∥ Q_1(⌈d_k d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}

for all d_j, d_k ∈ D□ and δ ∈ D□∗, and

[X_j; X_∅ ∥ Q_1(⌈d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} pop!d_j−→ [Pop_j; X_∅ ∥ Q_1(⌈d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [/NShift1to2; Test_∅; X_∅ ∥ Q_1(⌈d_j δ⌉ − j) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [Test_∅; X_∅ ∥ P_1(0) ∥ Q_2(⌈δ⌉)]_{a_1, a_2, b_1, b_2, c_1, c_2} −→∗ [X_k; X_∅ ∥ Q_1(⌈d_k δ′⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}

for all d_j ∈ D□, δ ∈ D□∗ and δ = d_k δ′. We have (S_{d_k d_j δ}, [X_k; X_∅ ∥ Q_1(⌈d_k d_j δ⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}) ∈ R1 and (S_{d_k δ′}, [X_k; X_∅ ∥ Q_1(⌈d_k δ′⌉) ∥ P_2(0)]_{a_1, a_2, b_1, b_2, c_1, c_2}) ∈ R1. By an analysis similar to the previous case, we conclude that R is a bisimulation up to ↔b. By Lemma 2, we have R ⊆ ↔b. Moreover, there is no infinite τ-transition sequence from any process defined above; therefore R ⊆ ↔∆b.
Hence, we have S_ε ↔∆b S. ∎
Next we proceed to define the tape by means of two stacks. We consider the following infinite specification in TSP; of a tape:

T_{δ_L ď δ_R} = r!d.T_{δ_L ď δ_R} + Σ_{e∈D□} w?e.T_{δ_L ě δ_R} + L?m.T_{δ_L< d δ_R} + R?m.T_{δ_L d >δ_R} + 1.

We define the tape process in TCP♯ as follows:

T = [T_□ ∥ S_1 ∥ S_2]_{push_1, pop_1, push_2, pop_2}
T_d = r!d.T_d + Σ_{e∈D□} w?e.T_e + L?m.Left_d + R?m.Right_d + 1 (d ∈ D□)
Left_d = Σ_{e∈D□} ((pop_1?e + 1); (push_2!d + 1); T_e)
Right_d = Σ_{e∈D□} ((pop_2?e + 1); (push_1!d + 1); T_e),

where S_1 and S_2 are two stacks with push_1, pop_1, push_2 and pop_2 as their interfaces. We establish the following result.
Lemma 6. T_□̌ ↔∆b T.
Proof.
We define the following auxiliary processes: S ( δ ) = [ X , k k Q ( ⌈ δ ⌉ ) k P (0)] { a , a , b , b , c , c } S ( δ ) = [ X , k k Q ( ⌈ δ ⌉ ) k P (0)] { a , a , b , b , c , c } , where δ = d k δ ′ . X , k and X , k is obtained by renaming push and pop in X k to push , pop , push and pop respectively.We use δ to denote the reverse sequence of δ .We verify that R = { ( T δ L ˇ d δ R , [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } ) | d ∈ D (cid:3) , δ L , δ R ∈ D ∗ (cid:3) } ⊆↔ ∆ b . T δ L ˇ d δ R has the following transitions: T δ L ˇ d δ R r ! d −→ T δ L ˇ d δ R T δ L ˇ d δ R w ? e −→ T δ L ˇ e δ R for all e ∈ D (cid:3) T δ L ˇ d δ R L ? m −→ T δ L < d δ R if δ L , ǫ T δ L ˇ d δ R R ? m −→ T δ L d > δ R if δ R , ǫ T δ L ˇ d δ R L ? m −→ T ǫ ˇ (cid:3) d δ R if δ L = ǫ and T δ L ˇ d δ R R ? m −→ T δ L d ˇ (cid:3) ǫ if δ R = ǫ . They are simulated by the following transitions:[ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } r ! d −→ [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } e ? d −→ [ T e k S ( δ L ) k S ( δ R )] { push , pop , push , pop } for all e ∈ D (cid:3) [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } L ? m −→ [ Left d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } −→ ∗ [ T e k S ( δ ′ L ) k S ( d δ R )] { push , pop , push , pop } , δ L = δ ′ L e , if δ L , ǫ [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } R ? m −→ [ Right d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } −→ ∗ [ T e k S ( δ L d ) k S ( δ ′ R )] { push , pop , push , pop } , δ R = e δ R , if δ R , ǫ [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } L ? m −→ [ Left d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } −→ ∗ [ T (cid:3) k S ( ǫ ) k S ( d δ R )] { push , pop , push , pop } , if δ L = ǫ [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } R ? 
m −→ [ Right d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } −→ ∗ [ T (cid:3) k S ( δ L d ) k S ( ǫ )] { push , pop , push , pop } , if δ R = ǫ . We have ( T δ L ˇ d δ R , [ T d k S ( δ L ) k S ( δ R )] { push , pop , push , pop } ) ∈ R , ( T δ L ˇ e δ R , [ T e k S ( δ L ) k S ( δ R )] { push , pop , push , pop } ) ∈ R , ( T δ L < d δ R , [ T e k S ( δ ′ L ) k S ( d δ R )] { push , pop , push , pop } ) ∈ R , ( T δ L d > δ R , [ T e k S ( δ L d ) k S ( δ ′ R )] { push , pop , push , pop } ) ∈ R , ( T ǫ ˇ (cid:3) d δ R , [ T (cid:3) k S ( ǫ ) k S ( d δ R )] { push , pop , push , pop } ) ∈ R , and( T δ L d ˇ (cid:3) ǫ , [ T (cid:3) k S ( δ L d ) k S ( ǫ )] { push , pop , push , pop } ) ∈ R . R is a bisimulation up to ↔ b . Therefore, R ⊂↔ b .Moreover, there is no infinite τ -transition sequence from the processes defined above. Therefore, R ⊆↔ ∆ b .Hence, we have T ˇ (cid:3) ↔ ∆ b T . (cid:3) Finally, we construct a finite control process for an RTM M = ( S M , −→ M , ↑ M , ↓ M ) as follows: C s , d = Σ ( s , d , a , e , M , t ) ∈−→ M ( a . w ! e . M ! m . Σ f ∈D (cid:3) r ? f . C t , f )[ + ] s ↓ M ( s ∈ S M , d ∈ D (cid:3) ) . We prove the following lemma.
Lemma 7. T ( M ) ↔ ∆ b [ C ↑ M , (cid:3) k T ] { r , w , L , R } .Proof. By the proof of Theorem 1, ↔ ∆ b is compatible with parallel composition. Therefore, it is enoughto show that T ( M ) ↔ ∆ b [ C ↑ , (cid:3) k T ˇ (cid:3) ] { r , w , L , R } .We define a binary relation R by: R = { (( s , δ L ˇ d δ R ) , [ C s , d k T δ L ˇ d δ R ] { r , w , L , R } ) | s ∈ S M , δ L , δ R ∈ D ∗ (cid:3) , d ∈ D (cid:3) }∪ { (( s , δ L < d δ R ) , [ C s , f k T δ L < d δ R ] { r , w , L , R } ) | s ∈ S M , δ L , δ R ∈ D ∗ (cid:3) , d ∈ D (cid:3) , δ L , ǫ, δ L = δ ′ L f }∪ { (( s , δ L d > δ R ) , [ C s , f k T δ L d > δ R ] { r , w , L , R } ) | s ∈ S M , δ L , δ R ∈ D ∗ (cid:3) , d ∈ D (cid:3) , δ R , ǫ, δ R = f δ ′ R }∪ { (( s , ˇ (cid:3) δ R ) , [ C s , (cid:3) k T ˇ (cid:3) δ R ] { r , w , L , R } ) | s ∈ S M , δ R ∈ D ∗ (cid:3) }∪ { (( s , δ L ˇ (cid:3) ) , [ C s , (cid:3) k T δ L ˇ (cid:3) ] { r , w , L , R } ) | s ∈ S M , δ L ∈ D ∗ (cid:3) } . We show that
R ⊆↔ ∆ b .( s , δ L ˇ d δ R ) has the following transitions:( s , δ L ˇ d δ R ) a −→ ( t , δ L < e δ R ) if ( s , d , a , e , L , t ) ∈−→ M , δ L , ǫ ( s , δ L ˇ d δ R ) a −→ ( t , δ L e > δ R ) if ( s , d , a , e , R , t ) ∈−→ M , δ R , ǫ ( s , δ L ˇ d δ R ) a −→ ( t , ˇ (cid:3) e δ R ) if ( s , d , a , e , L , t ) ∈−→ M , δ L = ǫ ( s , δ L ˇ d δ R ) a −→ ( t , δ L e ˇ (cid:3) ) if ( s , d , a , e , R , t ) ∈−→ M , δ R = ǫ . They are simulated by:[ C s , d k T δ L ˇ d δ R ] { r , w , L , R } a −→ [ w ! e . L ! m . Σ f ∈D (cid:3) r ? f . C t , f k T δ L ˇ d δ R ] { r , w , L , R } −→ ∗ [ C t , f k T δ L < d δ R ] { r , w , L , R } , if ( s , d , a , e , L , t ) ∈−→ M , δ L , ǫ, δ L = δ ′ L f [ C s , d k T δ L ˇ d δ R ] { r , w , L , R } a −→ [ w ! e . R ! m . Σ f ∈D (cid:3) r ? f . C t , f k T δ L ˇ d δ R ] { r , w , L , R } −→ ∗ [ C t , f k T δ L d > δ R ] { r , w , L , R } , if ( s , d , a , e , R , t ) ∈−→ M , δ R , ǫ, δ R = f δ ′ R [ C s , d k T δ L ˇ d δ R ] { r , w , L , R } a −→ [ w ! e . L ! m . Σ f ∈D (cid:3) r ? f . C t , f k T δ L ˇ d δ R ] { r , w , L , R } −→ ∗ [ C t , (cid:3) k T ˇ (cid:3) d δ R ] { r , w , L , R } , if ( s , d , a , e , L , t ) ∈−→ M , δ L = ǫ [ C s , d k T δ L ˇ d δ R ] { r , w , L , R } a −→ [ w ! e . R ! m . Σ f ∈D (cid:3) r ? f . C t , f k T δ L ˇ d δ R ] { r , w , L , R } −→ ∗ [ C t , (cid:3) k T δ L d ˇ (cid:3) ] { r , w , L , R } , if ( s , d , a , e , L , t ) ∈−→ M , δ R = ǫ . We apply similar analysis to other pairs in R . Using the proof strategy similar to Lemma 5, it isstraightforward show that R is a bisimulation up to ↔ b . Hence, we have R ⊂↔ b . Moreover, using asimilar strategy in the proof showing a π -calculus is reactively Turing powerful [ ? ], we can show that R os Baeten, BasLuttik &FeiYang 25satisfies the divergence-preserving condition. 
For every infinite τ-transition sequence in T(M), we can find an infinite τ-transition sequence in the transition system induced by [C↑_{M,□} ‖ T_{ˇ□}]_{r,w,L,R}. Therefore, R ⊆ ↔Δb. Hence, we have T(M) ↔Δb [C↑_{M,□} ‖ T_{ˇ□}]_{r,w,L,R}. ∎

We have the following theorem.
Theorem 3.
TCP♯ is reactively Turing powerful modulo ↔Δb.

In this paper we have proposed a revised operational semantics of the sequential composition operator in the presence of intermediate termination. We established two results that remain unsolved with the standard version of the sequential composition operator. We first proved that, with the revised semantics, every context-free process corresponds to a pushdown process modulo strong bisimilarity. We also proved that TCP♯ is a reactively Turing powerful process calculus modulo divergence-preserving branching bisimilarity.

There are still some negative premises in the operational rules for the revised sequential composition operator. For instance, unguarded recursion causes problems. Consider the following process:

P = P ; P + a.1.

According to our operational semantics, no transition is allowed from P. If we replace ; by ·, then P would be able to do an a-labelled transition, resulting in infinitely many distinct states. We do not yet have a satisfactory solution for dealing with transitions from processes specified by unguarded recursion.

Moreover, the congruence property only holds for rooted divergence-preserving branching bisimilarity. It fails for the rooted divergence-insensitive version of branching bisimilarity on TCP;. Consider the following processes:

P1 = τ.1,   P2 = (τ.1)*,   Q = a.1.

We have P1 ↔rb P2 but not P1 ; Q ↔rb P2 ; Q, since after a τ-labelled transition the first process can do an a-labelled transition, whereas the second process can only ever do τ-labelled transitions.

Moreover, the standard semantics is designed to satisfy the axiom (x + y) · z = x · z + y · z. This axiom is no longer valid in the revised semantics. For instance, (a + 1) ; b is no longer equivalent to a ; b + 1 ; b modulo any behavioural equivalence.
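To make this contrast concrete, the difference between the standard "·" and the revised ";" can be sketched with a small interpreter over finite process terms. The sketch below is our own illustration, not part of the formal definition of the calculus: it assumes the revised rule passes control to the second argument only when the first argument terminates and offers no transitions (the negative premise discussed above), whereas the standard rule requires only termination.

```python
# Illustrative-only interpreter for finite process terms.
# Terms: ("0",) deadlock, ("1",) termination, ("act", a) action prefix a.1,
# ("+", p, q) choice, (".", p, q) standard seq, (";", p, q) revised seq.

def terminates(p):
    op = p[0]
    if op == "1":
        return True
    if op in ("0", "act"):
        return False
    if op == "+":
        return terminates(p[1]) or terminates(p[2])
    return terminates(p[1]) and terminates(p[2])  # "." and ";"

def steps(p):
    """Return the list of transitions (a, p') of term p."""
    op = p[0]
    if op in ("0", "1"):
        return []
    if op == "act":
        return [(p[1], ("1",))]
    if op == "+":
        return steps(p[1]) + steps(p[2])
    out = [(a, (op, p1, p[2])) for a, p1 in steps(p[1])]
    if op == ".":
        if terminates(p[1]):  # transparent: may always skip a terminating first part
            out += steps(p[2])
    else:  # ";": skip only if the first part terminates AND has no transitions
        if terminates(p[1]) and not steps(p[1]):
            out += steps(p[2])
    return out

a1, one, b1 = ("act", "a"), ("1",), ("act", "b")
lhs = (";", ("+", a1, one), b1)             # (a.1 + 1); b.1
rhs = ("+", (";", a1, b1), (";", one, b1))  # a.1; b.1 + 1; b.1

print(sorted(a for a, _ in steps(lhs)))  # ['a']       -- b is not enabled
print(sorted(a for a, _ in steps(rhs)))  # ['a', 'b']  -- 1; b.1 contributes b
```

Since (a + 1) has an a-transition, the negative premise blocks the skip on the left-hand side, while the summand 1 ; b on the right-hand side enables an immediate b-step; any behavioural equivalence that respects initial actions therefore distinguishes the two.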
In future work, we shall provide a sound and complete axiomatisation for the process calculus TCP; with respect to strong bisimilarity as well as rooted divergence-preserving branching bisimilarity.

Another interesting direction is to establish reactive Turing powerfulness of other process calculi with non-regular iterators based on the revised semantics of the sequential composition operator. For instance, we could consider the pushdown operator "♯" and the back-and-forth operator "⇆" introduced by Bergstra and Ponse in [6]. They are defined by the following equations:

P1 ♯ P2 = P1 ; (P1 ♯ P2) ; (P1 ♯ P2) + P2
P1 ⇆ P2 = P1 ; (P1 ⇆ P2) ; P1 + P2.

By analogy with the nesting operator, we shall also give these operators a proper operational semantics, and then use the resulting calculus to define other versions of terminating counters.
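To see informally why the pushdown operator yields non-regular behaviour, note that in P1 ♯ P2 each step of P1 leaves one more pending copy of P1 ♯ P2 to be executed, while each step of P2 discharges one. Abstracting the reachable states of a.1 ♯ b.1 by the number of pending copies gives a counter. The sketch below is our simplification for intuition only, not an operational semantics:

```python
# States of a.1 # b.1 abstracted by the number n of pending copies:
# an 'a'-step unfolds one copy into two (n -> n + 1),
# a 'b'-step discharges one copy (n -> n - 1),
# and the process can terminate exactly when n == 0.

def traces(n, depth):
    """All action sequences of length <= depth from counter value n."""
    result = {()}
    if depth == 0 or n == 0:
        return result
    result |= {("a",) + t for t in traces(n + 1, depth - 1)}
    result |= {("b",) + t for t in traces(n - 1, depth - 1)}
    return result

# Started from one pending copy (n = 1), every trace prefix satisfies
# #b <= #a + 1 -- a constraint no finite-state process can enforce.
for t in sorted(traces(1, 3)):
    print("".join(t) or "(empty)")
```

For example, abb is a trace from n = 1 but bb is not, since the first b already empties the counter; the bound on depth only truncates the (infinite) set of traces for printing.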
References

[1] Jos Baeten, Twan Basten & Michel Reniers (2010): Process Algebra: Equational Theories of Communicating Processes. Cambridge Tracts in Theoretical Computer Science 50, Cambridge University Press.
[2] Jos Baeten, Pieter Cuijpers, Bas Luttik & Paul van Tilburg (2009): A process-theoretic look at automata. In: International Conference on Fundamentals of Software Engineering, Springer, pp. 1–33.
[3] Jos Baeten, Pieter Cuijpers & Paul van Tilburg (2008): A context-free process as a pushdown automaton. In: International Conference on Concurrency Theory, Springer, pp. 98–113.
[4] Jos Baeten, Bas Luttik & Paul van Tilburg (2013): Reactive Turing Machines. Information and Computation 231, pp. 143–166.
[5] Jan Bergstra, Inge Bethke & Alban Ponse (1994): Process algebra with iteration and nesting. The Computer Journal 37(4), pp. 243–258.
[6] Jan Bergstra & Alban Ponse (2001): Non-regular iterators in process algebra. Theoretical Computer Science 269(1), pp. 203–229.
[7] Wan Fokkink, Rob van Glabbeek & Bas Luttik (2017): Divide and Congruence III: Stability & Divergence. In: International Conference on Concurrency Theory.
[8] Rob van Glabbeek (1993): The linear time – branching time spectrum II. In: CONCUR '93, Springer, pp. 66–81.
[9] Rob van Glabbeek, Bas Luttik & Nikola Trčka (2009): Branching bisimilarity with explicit divergence. Fundamenta Informaticae 93(4), pp. 371–392.
[10] Bas Luttik & Fei Yang (2015): Executable Behaviour and the π-Calculus (extended abstract). In: Proceedings 8th Interaction and Concurrency Experience, ICE 2015, Grenoble, France, 4–5th June 2015, pp. 37–52, doi:10.4204/EPTCS.189.5.
[11] Bas Luttik & Fei Yang (2016): On the Executability of Interactive Computation. In Arnold Beckmann, Laurent Bienvenu & Natasa Jonoska, editors: Pursuit of the Universal – 12th Conference on Computability in Europe, CiE 2016, Paris, France, June 27 – July 1, 2016, Proceedings, Lecture Notes in Computer Science 9709, Springer, pp. 312–322.
[12] Robin Milner (1989): Communication and Concurrency. Prentice Hall.
[13] David Park (1981): Concurrency and automata on infinite sequences. In: Theoretical Computer Science, Springer, pp. 167–183.
[14] Paul van Tilburg (2011): From Computability to Executability: A Process-Theoretic View on Automata Theory. Ph.D. thesis, Eindhoven University of Technology.
[15] Alan Turing (1936): On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42, pp. 230–265.