Synchrony vs Causality in the Asynchronous Pi-Calculus
B. Luttik and F. D. Valencia (Eds.): 18th International Workshop on Expressiveness in Concurrency (EXPRESS 2011). EPTCS 64, 2011, pp. 89–103, doi:10.4204/EPTCS.64.7

© K. Peters, J.-W. Schicke & U. Nestmann. This work is licensed under the Creative Commons Attribution License.
Synchrony vs Causality in the Asynchronous Pi-Calculus ∗

Kirstin Peters
School of EECS, TU Berlin, Germany [email protected]
Jens-Wolfhard Schicke
Institute for Programming and Reactive Systems, TU Braunschweig, Germany [email protected]
Uwe Nestmann
School of EECS, TU Berlin, Germany [email protected]
We study the relation between process calculi that differ in their either synchronous or asynchronous interaction mechanism. Concretely, we are interested in the conditions under which synchronous interaction can be implemented using just asynchronous interactions in the π-calculus. We assume a number of minimal conditions referring to the work of Gorla: a "good" encoding must be compositional and preserve and reflect computations, deadlocks, divergence, and success. Under these conditions, we show that it is not possible to encode synchronous interactions without introducing additional causal dependencies in the translation.

Keywords: asynchrony, distributed systems, causality, pi-calculus
1 Introduction

We study the relation between process calculi that differ in their either synchronous or asynchronous interaction mechanism. Synchronous and asynchronous interactions are the two basic paradigms of interactions in distributed systems. While synchronous interactions are widely used in specification languages, asynchronous interactions are often better suited to implement real systems. We are interested in the conditions under which synchronous interactions can be implemented using just asynchronous interactions, i.e., in the conditions under which it is possible to encode the synchronous π-calculus into its asynchronous variant. To partially answer this question, we examine the role of causality for encoding synchrony.

Of course, we are not interested in trivial or meaningless encodings. Instead we consider only those encodings that ensure that the original term and its encoding show to some extent the same abstract behaviour. Unfortunately, there is no consensus about what properties make an encoding "good" (compare e.g. [12]). Instead, we find separation results as well as encodability results with respect to very different conditions, which naturally leads to incomparable results. Among these conditions, a widely used criterion is full abstraction, i.e. the preservation and reflection of equivalences associated to the two compared languages. There are lots of different equivalences in the range of π-calculus variants. Since full abstraction depends, by definition, strongly on the chosen equivalences, a variation in the respective choice may change an encodability result into a separation result, or vice versa. Unfortunately, there is also no common agreement about which kinds of equivalence are well suited for language comparison; again, the results are often incomparable.

∗ This work was supported by the DFG (German Research Foundation), grants NE-1505/2-1 and GO-671/6-1.
To overcome these problems, and to form a more robust and uniform approach for language comparison, Gorla [5, 6] identifies five criteria as being well suited for separation as well as encodability results. In this paper, we rely on these five criteria. Compositionality and name invariance stipulate structural conditions on a good encoding. Operational correspondence requires that a good encoding preserves and reflects the computations of a source term. Divergence reflection states that a good encoding shall not exhibit divergent behaviour, unless it was already present in the source term. Finally, success sensitiveness requires that a source term and its encoding have exactly the same potential to reach a successful state.

A discussion on synchrony versus asynchrony cannot be separated from a discussion of choice. When processes communicate via message-passing along channels, they do not only listen to one channel at a time; they concurrently listen to a whole selection of channels. Choice operators just make this natural intuition explicit; moreover, their mutual exclusion property allows us to concisely describe the particular effect of message-passing actions on the process's local state. Asynchronous send actions make no sense as part of a mutually exclusive selection, as they cannot be prevented from happening. Consequently, the asynchronous calculus offers only input-guarded choice. In contrast, synchronous send actions also allow for the definition of mixed choice: selections of both input and output actions.

It is well known that there is a good encoding from the choice-free synchronous π-calculus into its asynchronous variant [2, 8, 7]. It is also well known [11, 6, 13] that there is no good encoding from the full π-calculus, i.e. the synchronous π-calculus including mixed choice, into its asynchronous variant if the encoding translates the parallel operator homomorphically.
Palamidessi was the first to point out that mixed choice strictly raises the absolute expressive power of the synchronous π-calculus compared to its asynchronous variant. Analysing this result [13], we observe that it boils down to the fact that the full π-calculus can break merely syntactic symmetries, whereas its asynchronous variant cannot; there is no need to refer to a semantic problem like the existence of solutions to leader election. Moreover, as Gorla [6] already states, the condition of homomorphic translation of the parallel operator is rather strict. Therefore, Gorla proposes the weaker criterion of compositional translation of the source language operators (see Definition 2.5). As claimed in [14], this weakening of the structural condition on the encoding of the parallel operator turns the separation result into an encodability result, i.e., there is an encoding from the synchronous π-calculus (including mixed choice) into its asynchronous variant with respect to the criteria of Gorla. Analysing the encoding attempt given in [14], we observe that it introduces additional causal dependencies, i.e., causal dependencies that were not present in the source term and thus introduced by the encoding function. Note that a step B is considered causally dependent on a previous step A if B depends on the availability of data produced by A. In this paper, we show that this is a general phenomenon of encoding synchrony.

Thus, as the main contribution of this paper, we show that, in the asynchronous π-calculus, there is a strong connection between synchronous interactions and causal dependencies. More precisely, we show that it is not possible to encode synchronous interactions within a completely asynchronous framework without introducing additional causal dependencies in the translation. Moreover, we discuss the role of mixed choice to derive this result for the π-calculus. The companion paper [17] presents a similar result in the context of Petri nets.
Hence, this connection between synchronous interactions and causal dependencies is presumably not an effect of the representation of concurrent systems in either the π-calculus or Petri nets. (Note that the encoding of [14] is neither prompt nor is the assumed equivalence ≍ strict, so the separation results of [5] and [6] do not apply here.)
Overview of the Paper. In § 2, we introduce the synchronous π-calculus and its asynchronous variant. We revisit some notions and results of [13], recall the five criteria of [6] to measure the quality of an encoding, and talk about a definition of causality. In § 3, we present our separation result and discuss some observations on its proof. We conclude in § 4.

2 Technical Preliminaries

2.1 The π-calculus

Our source language is the monadic π-calculus as described for instance in [16]. Since the main reason for the absolute difference in the expressiveness of the full π-calculus compared to the asynchronous π-calculus is the power of mixed choice, we denote the full π-calculus also by π_mix.

Let N denote a countably infinite set of names and N̄ the set of co-names, i.e., N̄ = { n̄ | n ∈ N }. We use lower case letters a, a′, a₁, ..., x, y, ... to range over names.

Definition 2.1 (π_mix). The set of process terms of the π-calculus (with mixed choice), denoted by P_mix, is given by

  P ::= (ν n)P  |  P | P  |  !P  |  ∑_{i∈I} π_i.P_i    where    π ::= y(x)  |  ȳ⟨z⟩

for some names n, x, y, z ∈ N and a finite index set I.

The interpretation of the defined process terms is as usual. Since all examples and counterexamples within this paper are CCS-like, we omit the objects of actions. Moreover, we denote the empty sum with 0 and omit it in continuations. As usual, we often write a sum ∑_{i∈{i₁,...,iₙ}} π_i.P_i as π_{i₁}.P_{i₁} + ... + π_{iₙ}.P_{iₙ}.

As target language, we use π_a, the asynchronous π-calculus (see [8] or [2]).

Definition 2.2 (π_a). The set of process terms of the asynchronous π-calculus, denoted by P_a, is given by

  P ::= (ν n)P  |  P | P  |  !P  |  0  |  ȳ⟨z⟩  |  y(x).P  |  [a = b]P

for some names n, a, b, x, y, z ∈ N.

Here, we equip the target language with the match operator, because it is used in [14], which introduces the only good encoding that we are aware of between the synchronous π-calculus (with mixed choice) and its asynchronous variant.
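The two grammars above can be rendered as a small abstract-syntax sketch. The following Python fragment is purely our own illustration (all class names are hypothetical and not part of the paper's formal development); it mirrors the productions of Definitions 2.1 and 2.2, with the asynchronous output carrying no continuation.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# Hypothetical abstract syntax mirroring Definitions 2.1 and 2.2.
# Mixed sums (Sum) belong to pi_mix only, while bare outputs without
# continuation (Out) model the asynchronous calculus pi_a.

@dataclass(frozen=True)
class Nil:                      # the empty sum 0
    pass

@dataclass(frozen=True)
class Res:                      # restriction (nu n)P
    name: str
    body: "Proc"

@dataclass(frozen=True)
class Par:                      # parallel composition P | P
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Repl:                     # replication !P
    body: "Proc"

@dataclass(frozen=True)
class In:                       # input prefix y(x).P
    chan: str
    param: str
    cont: "Proc"

@dataclass(frozen=True)
class Out:                      # asynchronous output y<z>: no continuation
    chan: str
    arg: str

@dataclass(frozen=True)
class Sum:                      # mixed choice of guarded branches (pi_mix only)
    branches: Tuple["Proc", ...]

Proc = Union[Nil, Res, Par, Repl, In, Out, Sum]

# The pi_a redex of rule COM_a below: y(x).0 | y<z>
redex = Par(In("y", "x", Nil()), Out("y", "z"))
```

The representation is only meant to make the syntactic difference between the two calculi concrete: a π_a term never contains a Sum, and its outputs are always top-level messages.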
With [14], this encoding depends on the availability of the match operator in the target language; we do not yet know whether there is such an encoding without match. Since matching does increase the expressive power of the asynchronous π-calculus (see [3]), answering this question is an important task for future work. However, note that the proof of our main result in Theorem 3.8 does not depend on this decision.

As shown by the encoding in [10], one could also use separate choice within an asynchronous variant of the calculus without a significant effect on its expressive power. We claim that our main result does not depend on the decision whether to allow separate choice or not. In Section 3.1 we give some hints on how to change the proof to capture separate choice in the target language.

We use capital letters P, P′, P₁, ..., Q, R, ... to range over processes. If we refer to processes without further requirements, we denote elements of P_mix; we sometimes use just P when the discussion applies to both calculi. Let fn(P) denote the set of free names in P. Let bn(P) denote the set of bound names in P. Likewise, n(P) denotes the set of all names occurring in P. Their definitions are completely standard.

The reduction semantics of π_mix and π_a are jointly given by the transition rules in Figure 2, where structural congruence, denoted by ≡, is given by the rules in Figure 1. Note that the rule COM_a for communication in π_a is a simplified version of the rule COM_mix for communication in π_mix.

  P ≡ Q  if Q can be obtained from P by renaming one or more of the bound names in P, silently avoiding name clashes
  P | 0 ≡ P        P | Q ≡ Q | P        P | (Q | R) ≡ (P | Q) | R        [a = a]P ≡ P        !P ≡ P | !P
  (ν n)0 ≡ 0       (ν n)(ν m)P ≡ (ν m)(ν n)P        P | (ν n)Q ≡ (ν n)(P | Q)  if n ∉ fn(P)

Figure 1: Structural Congruence.
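As a toy illustration of the monoid laws for | in Figure 1 (unit P | 0 ≡ P, commutativity, and associativity), the following sketch (our own, under an assumed nested-tuple term representation) decides congruence of parallel compositions by comparing their multisets of non-nil components. It covers only these three laws, not full structural congruence.

```python
# A minimal sketch, assuming terms are nested tuples: ("par", P, Q) for P | Q,
# ("nil",) for 0, and any other value as an opaque parallel component.

def components(proc):
    """Collect the parallel components of a term, dropping nils (P | 0 ≡ P)."""
    if proc == ("nil",):
        return []
    if isinstance(proc, tuple) and proc and proc[0] == "par":
        return components(proc[1]) + components(proc[2])
    return [proc]

def congruent_par(p, q):
    """P | Q ≡ Q | P and P | (Q | R) ≡ (P | Q) | R: compare component multisets."""
    return sorted(map(repr, components(p))) == sorted(map(repr, components(q)))

# a | (b | 0)  ≡  b | a
print(congruent_par(("par", "a", ("par", "b", ("nil",))),
                    ("par", "b", "a")))   # True
```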
The differences between these two rules result from the differences in the syntax, i.e. the lack of choice and the fact that only input can be used as guard in π_a. As usual, we use ≡_α if we refer to alpha-conversion (the first rule of Figure 1) only.

  COM_mix:  (... + y(x).P + ...) | (... + ȳ⟨z⟩.Q + ...)  ↦  {z/x}P | Q

  COM_a:    y(x).P | ȳ⟨z⟩  ↦  {z/x}P

  PAR:   if P ↦ P′ then P | Q ↦ P′ | Q
  RES:   if P ↦ P′ then (ν n)P ↦ (ν n)P′
  CONG:  if P ≡ P′, P′ ↦ Q′, and Q′ ≡ Q, then P ↦ Q

Figure 2: Reduction Semantics of π_mix and π_a.

We use σ, σ′, σ₁, ... to range over substitutions. A substitution is a mapping {x₁/y₁, ..., xₙ/yₙ} from names to names. The application of a substitution to a term, {x₁/y₁, ..., xₙ/yₙ}(P), is defined as the result of simultaneously replacing all free occurrences of yᵢ by xᵢ for i ∈ {1, ..., n}, possibly applying alpha-conversion to avoid capture or name clashes. On all names in N \ {y₁, ..., yₙ} the substitution behaves as the identity mapping. Let id denote identity, i.e. id is the empty substitution.

Let P ↦ (P ↦̸) denote the existence (non-existence) of a step from P, i.e. there is a (no) P′ ∈ P such that P ↦ P′. Moreover, let ⟹ be the reflexive and transitive closure of ↦ and let ↦^ω denote an infinite sequence of reduction steps.

The first quality criterion presented in Section 2.3 is compositionality. It induces the definition of a context. A context C([·]₁, ..., [·]ₙ) is simply a π-term, i.e. a π_a-term in case of Definition 2.5, with n holes. Putting some π_a-terms P₁, ..., Pₙ in this order into the holes [·]₁, ..., [·]ₙ of the context, respectively, gives a term denoted C(P₁, ..., Pₙ).
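Rule COM_a together with the substitution {z/x} can be sketched executably. The fragment below is our own illustration under an assumed flat representation of π_a processes (a list of parallel components); restriction and replication are omitted, and alpha-conversion to avoid capture is skipped, only parameter shadowing is respected.

```python
# Assumed representation: a pi_a process is a list of parallel components,
# where an output y<z> is ("out", y, z) and an input y(x).P is ("in", y, x, P)
# with P again such a list.

def substitute(proc, x, z):
    """Apply {z/x}: replace free occurrences of name x by z (no alpha-conversion)."""
    result = []
    for comp in proc:
        if comp[0] == "out":
            _, y, a = comp
            result.append(("out", z if y == x else y, z if a == x else a))
        else:
            _, y, p, cont = comp
            y = z if y == x else y
            # the parameter p binds its continuation: stop if it shadows x
            result.append(("in", y, p, cont if p == x else substitute(cont, x, z)))
    return result

def com_a_step(proc):
    """One reduction via COM_a: y(x).P | y<z> reduces to {z/x}P; None if no redex."""
    for i, recv in enumerate(proc):
        if recv[0] != "in":
            continue
        for j, msg in enumerate(proc):
            if i != j and msg[0] == "out" and msg[1] == recv[1]:
                rest = [c for k, c in enumerate(proc) if k not in (i, j)]
                return rest + substitute(recv[3], recv[2], msg[2])
    return None

# y(x).x<a> | y<z>  reduces to  z<a>
print(com_a_step([("in", "y", "x", [("out", "x", "a")]),
                  ("out", "y", "z")]))   # [('out', 'z', 'a')]
```

Returning None when no input can meet a matching message corresponds to the no-step predicate P ↦̸ used below.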
Note that a context may bind some free names of P₁, ..., Pₙ. The arity of a context is the number of its holes.

2.2 Symmetric Networks in the Asynchronous π-calculus

A network of degree n is a process (ν x̃)(P₁ | ... | Pₙ) for some n ∈ ℕ, some P₁, ..., Pₙ ∈ P, and a sequence of names x̃. We refer to P₁, ..., Pₙ as the processes of the network. Note that the processes of a network can be networks themselves. A symmetric network of degree n is a network of degree n such that Pᵢ = σ^{i−1}(P₁) for all i ∈ {1, ..., n} and some substitution σ, called symmetry relation, such that σⁿ = id. The minimal degree of a symmetry relation σ is the smallest n > 0 such that σⁿ = id. A symmetric execution is an execution starting at a symmetric network of degree n, returning to a symmetric network of the same degree after every n-th step, and which is either infinite or terminates in a symmetric network of degree n.

Let π_sep be the subcalculus of π_mix with separate but no mixed choice, i.e. there is no sum with both input- and output-guarded summands, and let P_sep be its set of processes. The first and the last author [13] prove that it is not possible in π_sep to break the symmetry of a symmetric network. More precisely, in Theorem 4.4 in [13], it is shown that any symmetric network in P_sep has at least one symmetric execution. Since P_a is a subset of P_sep, we obtain the following lemma.

Lemma 2.3.
Every symmetric network in P_a has at least one symmetric execution.

Moreover, by Lemma 5.4 in [13], we know that if the minimal degree of the symmetry relation is smaller than the degree of the symmetric network, then not only the network can be subdivided into a network of symmetric networks but also its symmetric execution. As already done in [13], we use this lemma in the context of a symmetric network P | P for some arbitrary P ∈ P_a. P | P is a symmetric network of degree 2 with the symmetry relation id. The minimal degree of id is 1. Let P | P ⟹ (ν x̃)(P′ | σ(P′)) be a symmetric execution of P | P for some P′ ∈ P_a, a sequence of names x̃, and some symmetry relation σ with σ² = id. By Lemma 5.4 in [13], this symmetric execution can be subdivided such that there is an execution P ⟹ (ν x̃′)P′ for some sequence of names x̃′.

Lemma 2.4.
Let P, P′ ∈ P_a, x̃ be a sequence of names, and σ be a symmetry relation with σ² = id. Every symmetric execution P | P ⟹ (ν x̃)(P′ | σ(P′)) can be subdivided such that P ⟹ (ν x̃′)P′ for some sequence of names x̃′.

2.3 Quality Criteria for Encodings

Gorla presented in [6] a small framework of five criteria well suited for language comparison. We use these five criteria to measure the quality of an encoding ⟦·⟧ from π_mix into π_a, i.e. an encoding ⟦·⟧ is "good" if it fulfils the five criteria proposed by Gorla. Note that for the definition of these criteria a behavioural equivalence ≍ on the target language is assumed. Its purpose is to describe the abstract behaviour of a target process, where abstract basically means with respect to the behaviour of the source term.

The five conditions are divided into two structural and three semantic criteria. The structural criteria include (1) compositionality and (2) name invariance. The semantic criteria include (3) operational correspondence, (4) divergence reflection and (5) success sensitiveness. In the following we use S, S′, S₁, ... to range over terms of the source language and T, T′, T₁, ... to range over terms of the target language.

Intuitively, an encoding is compositional if the translation of an operator depends only on the translation of its parameters. To mediate between the translations of the parameters, the encoding defines a unique context for each operator, whose arity is the arity of the operator. Moreover, the context can be parametrised on the free names of the corresponding source term. Note that our result is independent of this parametrisation.
Definition 2.5 (Criterion 1: Compositionality). The encoding ⟦·⟧ is compositional if, for every k-ary operator op of π_mix and for every subset of names N, there exists a k-ary context C_op^N([·]₁, ..., [·]ₖ) such that, for all S₁, ..., Sₖ with fn(S₁) ∪ ... ∪ fn(Sₖ) = N, it holds that ⟦op(S₁, ..., Sₖ)⟧ = C_op^N(⟦S₁⟧, ..., ⟦Sₖ⟧).

The second structural criterion states that the encoding should not depend on specific names used in the source term. Of course, an encoding that translates each name to itself simply preserves this condition. However, it is sometimes necessary and meaningful to translate a name into a sequence of names or to reserve a couple of names for the encoding, i.e. to give them a special function within the encoding. To ensure that there are no conflicts between the names used by the encoding function for special purposes and the source term names, the encoding is enriched with a renaming policy φ_⟦·⟧, i.e., a substitution from names into sequences of names. Based on such a renaming policy, an encoding is independent of specific names if it preserves each substitution σ on source terms by a substitution σ′ on target terms such that σ′ respects the changes made by the renaming policy.

Definition 2.6 (Criterion 2: Name Invariance). The encoding ⟦·⟧ is name invariant if, for every S and σ, it holds that

  ⟦σ(S)⟧ ≡_α σ′(⟦S⟧)  if σ is injective
  ⟦σ(S)⟧ ≍ σ′(⟦S⟧)    otherwise

where σ′ is such that φ_⟦·⟧(σ(a)) = σ′(φ_⟦·⟧(a)) for every a ∈ N.

The first semantic criterion is operational correspondence, which consists of a soundness and a completeness condition.
Completeness requires that every computation of a source term can be simulated by its translation, i.e., the translation does not reduce the computations of the source term. Soundness requires that every computation of a target term corresponds to some computation of the corresponding source term, i.e., the translation does not introduce new computations.
Definition 2.7 (Criterion 3: Operational Correspondence). The encoding ⟦·⟧ is operationally corresponding if it is

  Complete: for all S ⟹ S′, it holds that ⟦S⟧ ⟹ ≍ ⟦S′⟧;
  Sound: for all ⟦S⟧ ⟹ T, there exists an S′ such that S ⟹ S′ and T ⟹ ≍ ⟦S′⟧.

Note that the definition of operational correspondence relies on the equivalence ≍ to get rid of junk possibly left over within computations of target terms. Sometimes, we refer to the completeness criterion of operational correspondence as operational completeness and, accordingly, to the soundness criterion as operational soundness.

The next criterion concerns the role of infinite computations in encodings.

Definition 2.8 (Criterion 4: Divergence Reflection). The encoding ⟦·⟧ reflects divergence if, for every S, ⟦S⟧ ↦^ω implies S ↦^ω.

(To keep distinct names distinct, Gorla assumes that, for all n, m ∈ N, n ≠ m implies φ_⟦·⟧(n) ∩ φ_⟦·⟧(m) = ∅, where φ_⟦·⟧(x) is simply considered as a set here.)

The last criterion assumes a success operator ✓ to be part of the syntax of both the source and the target language. Likewise, we add ✓ to the syntax of π_mix in Definition 2.1 and of π_a in Definition 2.2. Since ✓ cannot be further reduced, the operational semantics is left unchanged in both cases. Moreover, note that n(✓) = fn(✓) = bn(✓) = ∅, so also the interplay of ✓ with the ≡-rules is smooth and does not require explicit treatment. The test for reachability of success is standard.

Definition 2.9 (Success). A process P ∈ P may lead to success, denoted as P⇓, if (and only if) it is reducible to a process containing a top-level unguarded occurrence of ✓, i.e.
∃P′, P″ ∈ P. P ⟹ P′ ∧ P′ ≡ P″ | ✓.

Note that we choose may-testing here. However, as we claim, our main result in Theorem 3.8 holds for must-testing as well.

Finally, an encoding preserves the behaviour of the source term if it and its corresponding target term answer the tests for success in exactly the same way.

Definition 2.10 (Criterion 5: Success Sensitiveness). The encoding ⟦·⟧ is success sensitive if, for every S, S⇓ if and only if ⟦S⟧⇓.

Note that this criterion only links the behaviours of source terms and their literal translations but not of their continuations. To do so, Gorla relates success sensitiveness and operational correspondence by requiring that the equivalence on the target language never relates two processes P and Q such that P⇓ and Q⇓̸.

Definition 2.11 (Success Respecting). ≍ ⊆ P_a × P_a is success respecting if, for every P and Q with P⇓ and Q⇓̸, it holds that P ≭ Q.

2.4 Causality

Analysing the five criteria of the last section, we observe that there are two structural criteria to ensure that a good encoding is implementable, i.e., is of practical interest, and there are three criteria to ensure that the encoding preserves and reflects the main behaviour of source terms. However, there is no criterion requiring the preservation or reflection of causal dependencies.

For the π-calculus, usually two kinds of causal dependencies are distinguished (see [15, 1]). The first one, called structural or subject dependencies, originates from the nesting of prefixes, i.e., from the structure of processes. A typical example of such a dependency is given by

  (ν b)(a.b̄ | b.c̄) | ā | c  ↦  (ν b)(b̄ | b.c̄) | c  ↦  c̄ | c  ↦  0.

The second step, on channel b, is causally dependent on the first step, because the first step unguards b̄. So b is causally dependent on a.
Similarly, c is causally dependent on b, and by transitivity c is causally dependent on a. The other kind of dependencies are called link or object dependencies and originate from the binding mechanisms on names. Here a typical example is (ν x)(ȳ⟨x⟩ | x̄). In a labelled semantics, the output on x is causally dependent on the extrusion of x by an output on y, i.e. x is causally dependent on y.

We observe that causal dependencies are defined as a condition between actions or names of actions. In the context of encodings this view is problematic, because steps are often translated into sequences of steps and names may be translated into sequences of names. Moreover, a sequence of steps simulating a single source term step may be interleaved with another such sequence or with some target term steps used to prepare the simulation of another source term step, whose simulation may never be completed. So, what precisely does it mean for an encoding to preserve or respect causal dependencies? If source term names are translated into sequences of names, should one consider the causal dependencies between all such translated names or only between some of them? Moreover, how should an encoding handle names reserved for some special purposes of the encoding function, i.e., target term names that do not result from the translation of a source term name?

We have no final answer to these questions yet. However, in the next section we prove a separation result, which does not require a thorough answer to the questions above. Instead, we use a definition of causal dependencies that is based only on direct subject dependencies. So within this paper, a step B is considered causally dependent on a previous step A if B depends on the availability of a capability produced by A.
More precisely, step B is causally dependent on step A if A unguards some capability, i.e., some input or output prefix, which is consumed by step B. An encoding preserves causal dependencies if, for any causal dependency between two steps of the source term, there is a causal dependency between some steps of their simulations; and an encoding reflects causal dependencies if, for any causal dependency between two steps of different, completed simulations, there is a causal dependency between the corresponding source term steps.

3 Separation Result

In this section, we show that any good encoding from π_mix into π_a introduces causal dependencies, i.e., is not causality respecting. To prove our main result, we analyse the context introduced to encode the parallel operator and examine how this context has to interact with the encodings of its parameters to allow for a simulation of a source term step. We start with some observations concerning the three process terms

  P ≜ a + ā | b + b̄.✓,    Q ≜ P | P,    and    R ≜ ā + b + b.✓ | b̄ + a + a.✓,

which are used in the following lemmata as counterexamples. Note that we choose Q and R such that each of them is a symmetric network of degree 2 with either σ = {a/b, b/a} or id as symmetry relation. Moreover, to fix the context used to encode the parallel operator, we choose P, Q, and R such that fn(P) = fn(Q) = fn(R) = {a, b}. Hence, by compositionality, for each of these three terms the outermost parallel operator is translated by exactly the same context C_|^{{a,b}}([·]₁, [·]₂).

Observation 3.1.
There exists a context C_|^{{a,b}}([·]₁, [·]₂) such that ⟦P⟧ = C_|^{{a,b}}(⟦a + ā⟧, ⟦b + b̄.✓⟧), ⟦Q⟧ = C_|^{{a,b}}(⟦P⟧, ⟦P⟧), and ⟦R⟧ = C_|^{{a,b}}(⟦ā + b + b.✓⟧, ⟦b̄ + a + a.✓⟧).

We choose P such that none of its executions leads to success, i.e., P ↦̸ and P ⇓̸. P ↦̸ implies, by operational soundness, that ⟦P⟧ cannot perform a step that changes its state modulo ≍, i.e. ⟦P⟧ ⟹ T_P implies T_P ⟹ ≍ ⟦P⟧ for all T_P ∈ P_a. By success sensitiveness, P ⇓̸ implies ⟦P⟧ ⇓̸. Because of that, and since ≍ is success respecting, we have T_P ⇓̸ for all T_P ∈ P_a such that ⟦P⟧ ⟹ T_P.

Observation 3.2. ∀T_P ∈ P_a. ⟦P⟧ ⟹ T_P implies T_P ⇓̸.

Hence, any occurrence of ✓, if there is any, in the context C_|^{{a,b}}([·]₁, [·]₂) is input guarded (since π_a forbids output guards) and the context cannot remove such a guard on its own. In contrast to P, we choose Q such that Q reaches an unguarded occurrence of success in any of its executions, i.e. Q ⟹ Q′ implies Q′⇓ for all Q′ ∈ P_mix. By operational completeness, any execution Q ↦ Q₁ ↦ Q₂ ↦̸ of Q can be simulated by its encoding, i.e. ⟦Q⟧ ⟹ Q′₁, ⟦Q⟧ ⟹ Q′₂, and ⟦Q⟧ ⟹ Q″, where Q′ᵢ ≍ ⟦Qᵢ⟧ for
Note that any (maximal) execution of Q is such that Q (cid:55)−→ Q (cid:55)−→ Q (cid:54)(cid:55)−→ for some Q , Q ∈ P mix . By operational soundness for each T Q ∈ P a such that (cid:74) Q (cid:75) (cid:90) = ⇒ T Q , there issome Q (cid:48) ∈ P mix such that Q (cid:90) = ⇒ Q (cid:48) and T Q (cid:90) = ⇒(cid:16) (cid:74) Q (cid:48) (cid:75) , i.e. there is some T (cid:48) Q ∈ P a such that T Q (cid:90) = ⇒ T (cid:48) Q and T (cid:48) Q (cid:16) (cid:74) Q (cid:48) (cid:75) . By success sensitiveness, (cid:74) Q (cid:48) (cid:75) ⇓ and since (cid:16) is success respecting, we have T (cid:48) Q ⇓ .Thus, by Definition 2.9, we have T Q ⇓ for all T Q ∈ P a with (cid:74) Q (cid:75) (cid:90) = ⇒ T Q . Observation 3.3. ∀ T Q ∈ P a . (cid:74) Q (cid:75) (cid:90) = ⇒ T Q implies T Q ⇓ At last we choose R such that some of its executions lead to success while some do not. R can reduceeither to (cid:88) or to . By operational completeness, (cid:74) R (cid:75) can simulate both steps, i.e. (cid:74) R (cid:75) (cid:90) = ⇒(cid:16) (cid:74) (cid:88) (cid:75) and (cid:74) R (cid:75) (cid:90) = ⇒(cid:16) (cid:74) (cid:75) . Since (cid:74) (cid:88) (cid:75) ⇓ and (cid:74) (cid:75) (cid:54)⇓ , and since (cid:16) is success respecting, we have (cid:74) (cid:88) (cid:75) (cid:54)(cid:16) (cid:74) (cid:75) . Byoperational soundness, for all T R ∈ P a such that (cid:74) R (cid:75) (cid:90) = ⇒ T R there is some R (cid:48) ∈ P mix such that R (cid:90) = ⇒ R (cid:48) and T R (cid:90) = ⇒(cid:16) (cid:74) R (cid:48) (cid:75) . Observation 3.4. ∃ T R , , T R , ∈ P a . (cid:74) R (cid:75) (cid:90) = ⇒ T R , ∧ (cid:74) R (cid:75) (cid:90) = ⇒ T R , ∧ T R , ⇓ ∧ T R , (cid:54)⇓ Our last observation concerns the structure of the context C { a , b }| ([ · ] , [ · ] ) . Because there is nochoice operator in P a , the context C { a , b }| ([ · ] , [ · ] ) has to place its parameters in parallel, as this is theonly binary operator for processes. 
However, even if we allow separate choice in the target language, the encodings of the parameters have to be placed in parallel, because placing them within a choice would not allow to use the encodings of both parameters to simulate target term steps. Consequently, there must be some P_a-contexts C₁([·]), C₂([·]), C₃([·]) with

  ⟦S₁ | S₂⟧ ≡ C_|^{{a,b}}(⟦S₁⟧, ⟦S₂⟧) ≡ C₁(C₂(⟦S₁⟧) | C₃(⟦S₂⟧))

for all source terms S₁, S₂ ∈ P_mix with fn(S₁ | S₂) = {a, b}.

Observation 3.5. ∃C₁([·]), C₂([·]), C₃([·]). C_|^{{a,b}}([·]₁, [·]₂) ≡ C₁(C₂([·]₁) | C₃([·]₂)).

Learning from the separation result in [13], we know that any good encoding from π_mix into π_a must break source term symmetries. To do so, we show that the context introduced by the encoding of the parallel operator (which is allowed in weakly compositional as opposed to homomorphic translations) must interact with the encodings of its parameters.

Lemma 3.6.
To simulate a source term step, the context C^{{a,b}}_|([·]₁, [·]₂) and the encodings of its parameters have to interact.

Intuitively, we show that if there is no such interaction, then, since Q is a symmetric network, its encoding behaves as a symmetric network again. Since any execution of ⟦Q⟧ leads to an unguarded occurrence of success, by symmetry and by Lemma 2.4 there is an execution of ⟦P⟧ leading to an unguarded occurrence of success, which contradicts Observation 3.2.

Proof.
Assume the opposite, i.e., assume that the context C^{{a,b}}_|([·]₁, [·]₂) is such that, possibly after some preprocessing steps of the context on its own, e.g. to unguard the parameters, the source term steps can be simulated without any interaction with the context. In this case we have

⟦Q⟧ = C^{{a,b}}_|(⟦P⟧, ⟦P⟧) ≡ C₁(C₂(⟦P⟧) | C₃(⟦P⟧)) ⟹ (νỹ)(σ₁(⟦P⟧) | σ₂(⟦P⟧) | T_C)

for some constant term T_C, a sequence of names ỹ, and two substitutions σ₁ and σ₂. Note that σ₁ and σ₂ capture the renamings done by alpha conversion that are possibly necessary to pull restrictions outwards. Since there is no need for an interaction, i.e., for a communication, with T_C to simulate source term steps, we can ignore it.

If σ₁ = σ₂ then, since these substitutions result from alpha conversion, σ₁ = σ₂ = id. Then ⟦P⟧ | ⟦P⟧ is a symmetric network of degree 2 with id as symmetry relation. By Lemma 2.3, ⟦P⟧ | ⟦P⟧ has a symmetric execution. By Observation 3.3, ⟦Q⟧ reaches an unguarded occurrence of success in any of its executions. Since the context, and with it T_C, cannot reach success on its own, and since there is no interaction, ⟦P⟧ | ⟦P⟧ reaches success in its symmetric execution. Then there is some T″_Q ∈ P_a such that ⟦P⟧ | ⟦P⟧ ⟹ (νx̃)(T″_Q | σ(T″_Q)) is a symmetric execution, for some sequence of names x̃ and some symmetry relation σ of degree 2, and (νx̃)(T″_Q | σ(T″_Q)) has an unguarded occurrence of success.
By symmetry, and since n(✓) = ∅, this implies that T″_Q as well as σ(T″_Q) has an unguarded occurrence of success. Since 2 is not the minimal degree of identity, by Lemma 2.4 this symmetric execution can be subdivided such that ⟦P⟧ ⟹ (νx̃′)T″_Q for some sequence of names x̃′. Then ⟦P⟧⇓, because of the unguarded occurrence of ✓ in T″_Q. That contradicts Observation 3.2.

The argument for σ₁ ≠ σ₂ is similar but more difficult. In this case σ₁(⟦P⟧) | σ₂(⟦P⟧) is still a symmetric network whose symmetric execution leads to an unguarded occurrence of success. But since its symmetry relation is not id, we cannot apply Lemma 2.4. However, because σ₁ and σ₂ result from alpha conversion, they rename free names of ⟦P⟧ to fresh names. If σ₁(⟦P⟧) and σ₂(⟦P⟧) want to interact on such a fresh name, then they first have to exchange this fresh name over a channel known to both; let us denote this channel by z. So either σ₁(⟦P⟧) receives a fresh name from σ₂(⟦P⟧) over z or vice versa. By symmetry, both terms have an unguarded input as well as an unguarded output on z, so, instead of a communication between these two processes, σ₁(⟦P⟧) can just as well reduce on its own. Adding this argument to the argument in the proof of Lemma 2.4 in [13], we can prove again that the symmetric execution of σ₁(⟦P⟧) | σ₂(⟦P⟧) can be subdivided such that σ₁(⟦P⟧)⇓. Because n(✓) = ∅, this implies ⟦P⟧⇓.
Hence, to simulate a source term step, the context necessarily has to interact with its parameters.

Note that the only possibility for the context to interact with its parameters is by communication. So the context contains at least one capability, i.e., an input or output prefix, that needs to be consumed to simulate a source term step. Without loss of generality, let us assume that indeed only a single capability needs to be consumed to simulate a step, i.e., that a single communication step of the context with (one of) its parameters suffices to enable the simulation of a source term step; the argument for a sequence of necessary communication steps is similar. Let us denote this capability by μ.

Next we show that it is not possible to simulate two different source term steps between the parameters of C^{{a,b}}_|([·]₁, [·]₂) at the same time.

Lemma 3.7.
At most one simulation of a source term step can be enabled concurrently by the context C^{{a,b}}_|([·]₁, [·]₂).

Here we use R as a counterexample. R can reduce either to 0 or to ✓, so the choice operator introduces mutual exclusion: it can immediately block an alternative reduction. Without choice, mutual exclusion is not that easy to implement. We show that simulating this blocking introduces either deadlock or divergence.

Proof.
Assume the opposite, i.e., assume that the context C^{{a,b}}_|([·]₁, [·]₂) provides several instances of μ, e.g. by replication, and with them enables the simulation of different alternative source term steps concurrently. (In case of a sequence of necessary steps, choose μ such that it denotes the capability consumed last in this sequence. In case there are different ways to enable the simulation of a step, consider a set of those capabilities with one μ_i for each such way.) Consider the source term R. Since {a/b, b/a}(a + b + b.✓) = b + a + a.✓, by name invariance there is some substitution σ′ such that σ′(⟦a + b + b.✓⟧) ≡_α ⟦b + a + a.✓⟧, i.e., these two terms are equal up to some renaming of free names.

Note that R can perform either a step on channel a or a step on channel b. Since there is no choice operator in π_a, the encodings of the capabilities of the sums a + b + b.✓ and b + a + a.✓ have to be placed somehow in parallel, such that the simulation of the source term step on a does not immediately withdraw the encodings of the capabilities on b, and vice versa. Thus, since the simulations of both steps of R are enabled concurrently, there is some point in the simulation of one source term step that disables the completion of the simulation of the respective other source term step. Therefore, one simulation has to consume some capability that is necessary to complete the other simulation. Remember that we assume that the only capability of the context C^{{a,b}}_|([·]₁, [·]₂) that is necessarily consumed to simulate a source term step is μ. Hence, to allow the simulation of one step of R, to disable the simulation of the respective other step of R, and since σ′(⟦a + b + b.✓⟧) ≡_α ⟦b + a + a.✓⟧, there is some capability in ⟦a + b + b.✓⟧ as well as in ⟦b + a + a.✓⟧ such that, to simulate a source term step, both of these capabilities have to be consumed. Moreover, since σ′(⟦a + b + b.✓⟧) ≡_α ⟦b + a + a.✓⟧, both capabilities are of the same kind, i.e., both are input prefixes or both are output prefixes. Note that there is no possibility in π_a to consume two capabilities of the same kind within the same target term step. Then it cannot be avoided that, for each of the simulations of the two steps, exactly one of these two capabilities is consumed. In this case none of the simulations can be completed, i.e., there is a local deadlock.

Considering ⟦Q⟧, such a deadlock leads to a term T_Q with ⟦Q⟧ ⟹ T_Q and, since none of the source term steps is simulated, no unguarded occurrence of ✓ is reached, i.e., ¬(T_Q⇓). Hence such a deadlock leads to a contradiction.

The only way to circumvent a deadlock in this situation is that one of these capabilities is released by one of the simulations. To complete the simulation of this step later on, it must be possible to consume the released capability again. But then it cannot be avoided that this happens before the other simulation is finished, i.e., this leads back to the situation before. Hence we introduce divergence, which contradicts divergence reflection. Thus, it is not possible that the simulations of alternative source term steps are enabled concurrently.

Note that, even if we allow separate choice within the target language, the simulation of the source term step on a cannot immediately withdraw the encodings of the capabilities on b, and vice versa, because either we have to split each of these sums into an input-guarded and an output-guarded sum, or we have to convert each of them into a single sum with only separate choice.
In the second case, since by compositionality both parameters have to be encoded in exactly the same way, we end up with either two input-guarded or two output-guarded sums. But then we cannot simulate a communication between these two sums within a single step and, moreover, we cannot decide within a single step whether a considered capability can be used to successfully simulate a source term step. Unfortunately, the first step used to test whether this capability can be used to simulate a source term step removes all the other encoded capabilities of that sum, which violates operational correspondence. So Lemma 3.7 holds as well for separate choice in the target language.

Now we can prove the main result of this paper.

Theorem 3.8.
Any good encoding from π_mix into π_a introduces additional causal dependencies.

By Lemma 3.6 and Lemma 3.7, the context must contain some capability, i.e., some kind of lock, that must be consumed to enable the simulation of a source term step. Moreover, to enable the simulation of subsequent source term steps, the capability that was consumed from the context must be restored. Then a subsequent simulating sequence is enabled by the consumption of a capability that was produced by a former simulating sequence; this interplay imposes a causal dependence between subsequent simulations of source term steps.
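This interplay can be replayed in a small sketch. The following Python model is purely illustrative (the names `run_simulations` and the token representation are our own, not part of any actual encoding): a single token stands for the capability μ provided by the context, and every simulation must consume it and then restore a fresh instance.

```python
from queue import Queue

# Illustrative model, not the construction of any particular encoding:
# the context provides a single instance of the capability mu; every
# simulation of a source term step consumes it and restores a fresh one.

def run_simulations(names):
    mu = Queue()
    mu.put(("mu", 0))           # the one instance provided by the context
    consumed = {}
    for i, name in enumerate(names):
        token = mu.get()        # consume the capability ...
        consumed[name] = token
        mu.put(("mu", i + 1))   # ... and restore a fresh instance
    return consumed

consumed = run_simulations(["sim1", "sim2"])
# sim2 consumed exactly the instance produced by sim1: a causal
# dependence, although the simulated source term steps were independent.
```

Whatever the scheduling, the second simulation is enabled only by the token the first simulation produced, which is the causal chain the theorem refers to.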
Proof.
By Lemma 3.6 and Lemma 3.7, the context C^{{a,b}}_|([·]₁, [·]₂) provides exactly one instance of μ, i.e., one capability, that needs to be consumed to simulate a source term step. Since ⟦Q⟧ ⟹ Q′₁ ⟹ Q′₂ with Q′_i ≍ ⟦Q_i⟧ for i ∈ {1, 2}, i.e., since ⟦Q⟧ simulates two subsequent source term steps, the instance of μ consumed by the simulation of the first source term step has to be restored during this first simulation such that the second step can be simulated. Thus, the simulation of the second step has to consume some capability μ produced by the simulation of the first step. Then the simulation of the second step causally depends on the simulation of the first step, although any pair of subsequent steps of Q is causally independent. We conclude that the encoding function adds additional causal dependencies.

It is no coincidence that the counterexamples in the proof of Theorem 3.8 rely on mixed choices. It is the power of mixed choice that allows π_mix to break initial symmetries. Note that the separation results of [11] and [13] are based on this absolute difference in the expressive power of π_mix compared to π-calculus variants without mixed choice. In [6], the role of breaking symmetries for the separation result is not equally obvious. Nevertheless, the counterexample used there also relies on mixed choices and their ability to break symmetries. In summary, the difference in the expressive power of π_mix compared to π_a essentially, if not exclusively, relies on the expressive power of mixed choice.

Synchrony versus Guarded Choice.
It is debatable in how far a discussion of synchrony versus asynchrony can be separated from a discussion of choice. In fact, even from a pragmatic point of view within our model of distributed reactive systems, it cannot. It is part of the nature of reactive systems (in our case: systems communicating via message-passing along channels) that agents do not only listen to one channel at a time; they concurrently listen to a whole selection of channels. In this respect, as soon as a calculus offers a synchronous (blocking) input primitive, it is natural to extend this primitive to an input-guarded choice. Having mutual exclusion on concurrently enabled inputs is useful when thinking of a process's local state that may be influenced differently by any received information along the competing input channels. (Joint input [9], as motivated in the join calculus [4], represents another natural and interesting generalisation.) Likewise, as soon as a calculus offers synchronous output, one may generalise this primitive to output-guarded choice. This generalisation seems less natural, though, as the process's state would hardly be influenced by a continuation of one of the branches after an output. However, having both input and output guards in the calculus, mixed choice becomes expressible. Mixed choice is again also natural, as the successful execution of an output may prevent a competing input, including the effect of the latter on the local state. These pragmatic arguments support the point of view that, in a message-passing scenario, any discussion of synchronous versus asynchronous interaction must consider a competitive context, as expressed by means of choice operators.
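The behaviour of input-guarded choice described above can be sketched as follows. This is a rough Python model with invented helper names, not a π-calculus encoding: a process listens on several channels at once; the first message to arrive commits its branch, and all competing branches are withdrawn, so the local state is updated by exactly one branch.

```python
import queue
import threading

def input_guarded_choice(branches, timeout=0.2):
    """branches: list of (channel, continuation) pairs. Listen on all
    channels concurrently; the first message received commits its branch
    and all other branches are withdrawn (mutual exclusion on state)."""
    lock = threading.Lock()
    result = []

    def listen(chan, cont):
        try:
            msg = chan.get(timeout=timeout)
        except queue.Empty:
            return                  # branch withdrawn: nothing arrived
        with lock:
            if not result:          # first branch to receive commits ...
                result.append(cont(msg))
                return
        chan.put(msg)               # ... later receivers put the message back

    threads = [threading.Thread(target=listen, args=br) for br in branches]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result[0] if result else None

ch_a, ch_b = queue.Queue(), queue.Queue()
ch_b.put("hello")                   # only channel b ever delivers
chosen = input_guarded_choice([(ch_a, lambda m: ("a", m)),
                               (ch_b, lambda m: ("b", m))])
```

Here the lock plays the role of the mutual exclusion that the choice operator provides natively in the synchronous calculus.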
The Role of Mixed Choice.
In the proof of Lemma 3.6, a counterexample based on mixed choice is used not only to rule out that the parallel operator is translated homomorphically (compare the argumentation of the separation results in [11], [6], and [13]) but also to prove that, in order to break symmetries, the context introduced to encode the parallel operator must interact with the encodings of its parameters. Moreover, only with mixed choice is it possible to give an example of a symmetric network […].

Consider the encoding from π_sep into π_a presented in [10]. To rule out conflicting steps on the same sum, the encoding of a sum introduces a so-called sum lock: a boolean-valued lock that is initially instantiated with true, but is set to false as soon as some summand of the sum commits to successfully simulate a source term step. Using this sum lock, the encoding function not only forbids using that summand twice but also forbids using any other summand of that sum to simulate subsequent source term steps. Hence, the sum lock of the encoding in [10] occupies exactly the role of the capability that is consumed by the simulation of a step to rule out the simulation of a conflicting step, as described above. Note that the encoding presented in [10] translates the parallel operator homomorphically and thus is no good encoding from π_mix into π_a. [10] explains that the application of that encoding function to terms with mixed choices (or, more precisely, in the presence of mixed choices with cyclic dependencies between the links of matching capabilities, as in R or Q) leads to exactly the deadlocked situation described in the proof above.

The proof of Theorem 3.8 reveals how to circumvent this problem with so-called cyclic sums. The encoding of the parallel operator has to ensure that there is at most one simulation of a source term step between the two parameters of that parallel operator at the same time.
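The deadlock caused by cyclic sums can be replayed in a few lines. The sketch below is a deliberately simplified model with invented names, not the actual encoding of [10]: each sum carries a boolean sum lock, and a simulation must set the locks of both involved sums to false in order to commit. If two conflicting simulations each grab one lock first, neither can complete.

```python
def try_commit(locks, grabbed):
    """A simulation already holding the locks in 'grabbed' tries to
    acquire all remaining sum locks; True means it can commit."""
    for name, free in locks.items():
        if name in grabbed:
            continue
        if free:
            locks[name] = False
            grabbed.add(name)
        else:
            return False            # taken by the competing simulation
    return True

# Two conflicting simulations without a lock at the parallel operator:
# each simulation grabs the sum lock of "its own" sum first.
locks = {"l1": True, "l2": True}
locks["l1"] = False; sim_a = {"l1"}
locks["l2"] = False; sim_b = {"l2"}
deadlocked = not try_commit(locks, sim_a) and not try_commit(locks, sim_b)

# With an additional lock at the parallel operator, as in [14], the two
# attempts are serialised: a single simulation running alone commits.
locks2 = {"l1": True, "l2": True}
alone_commits = try_commit(locks2, set())
```

Serialising the attempts avoids the deadlock, but it is exactly this serialisation that the following paragraphs identify as the source of additional causal dependencies.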
In fact, it is exactly this required blocking of alternative steps that leads to additional causal dependencies. As claimed in [14], the introduction of a lock at the level of the encoding of a parallel operator indeed suffices to circumvent the problem of cyclic sums. Note that, in the appendix of [14], the main line of argumentation of a respective proof is given. Moreover, it is explained there in more detail how, in the context of that particular encoding, this lock leads to additional causal dependencies. It is the temporary blocking of the simulation of source term steps, necessary to avoid deadlock or divergence in case of conflicting source term steps, that leads to additional causal dependencies in case of concurrent source term steps.

True Concurrency.
Let us take a closer look at the kind of additional causal dependencies an encoding function must, according to the above proof, introduce to encode synchronous interactions. We observe that the proof induces a causal dependency between the simulations of all subsequent source term steps between the two sides of the same parallel operator. Moreover, we observe that a later such step is causally dependent on a former one, but that no fixed order on the simulations of steps is induced. The same observation holds for the encoding presented in [14]: it is forced to block some simulations of source term steps until another simulation is finished, but there is no directive on which kind of steps have to be simulated first. Hence, the simulations of two concurrent source term steps can still appear in either order but, if both steps are due to a communication over the same parallel operator, i.e., between the same processes of a network, the corresponding simulations cannot be performed truly concurrently. Note that the proof above does not state that the described additional causal dependency is the only kind of causal dependency any good encoding of synchrony has to introduce. This might be an interesting question for further research.

Figure 3: A fully reached, pure M [17].

We show that, in the context of the π-calculus, any good encoding of synchronous interaction within a purely asynchronous setting introduces additional causal dependencies and thus reduces the number of truly parallel steps of the original term. Moreover, the proof of this result further illustrates the importance of the role of mixed choice to distinguish the expressive power of π_mix and π_a. To tighten that view on the importance of mixed choice for the introduction of causal dependencies in encoding synchrony, we have to show that the encoding from π_sep into π_a presented in [10] does not introduce additional causal dependencies. This statement seems intuitively correct; we leave its formal proof for further research.

As already mentioned, there is a companion paper [17] that proves a result similar to Theorem 3.8 in the context of Petri nets. More precisely, it shows that it is not always possible to find a finite, 1-safe, distributed net which is completed pomset trace equivalent to a given net. Note that completed pomset trace equivalence is sensitive to (local) deadlocks and causal dependencies.
Hence, the connection between synchronous interactions and causalities, i.e., the fact that any good encoding of synchrony changes the causal semantics of the source, is not an effect of the representation of concurrent systems in either the π-calculus or Petri nets, but seems to be a phenomenon of synchronous and asynchronous interactions in general.

A closer look at the proof in [17] reveals that it depends on a counterexample including a so-called fully reached pure M (see Figure 3). Similarly, our result depends on counterexamples that are symmetric networks including mixed choices. In both cases, the counterexample refers to a situation in the synchronous setting in which there are two distinct but conflicting steps. To solve this conflict, two simultaneous activities are necessary: in case of the π-calculus the reduction of two sums, in case of Petri nets the removal of two tokens. In the asynchronous setting, this simultaneous solution must be serialised, e.g. by means of some kind of lock, which blocks the enabling of the asynchronous simulations of source term steps such that no two simulations of conflicting source steps are enabled concurrently. In both formalisms, Petri nets and the π-calculus, it is this temporary blocking of the simulation of source term steps, necessary to avoid deadlock or divergence in case of conflicting source term steps, that leads to additional causal dependencies.

However, apart from this apparent similarity, the relation between the two results leaves us with a
[…]. π-calculus implementations and Petri net implementations take rather different forms. Additionally, in contrast to the Petri net result, the present paper has no need to employ a (pomset-based) equivalence to compare source and target terms, and it also does not need to deal specifically with infinite implementations. On the other hand, the Petri net result does not impose any restrictions on the encoding itself, but connects source and target nets by means of behaviour only, without any reference to the net structure. Finally, the Petri nets considered in [17] are not Turing-complete. So is it possible to derive the same result considering a Turing-complete formalism, as for instance Petri nets with inhibitor arcs? We hope to answer some of these questions in future work.

References

[1] Michele Boreale & Davide Sangiorgi (1998): A fully abstract semantics for causality in the π-calculus. Acta Inf.

[2] Gérard Boudol (1992): Asynchrony and the π-calculus (note). Note, INRIA.

[3] Marco Carbone & Sergio Maffeis (2003): On the Expressive Power of Polyadic Synchronisation in π-Calculus. Nordic Journal of Computing.

[4] Cédric Fournet & Georges Gonthier (1996): The Reflexive Chemical Abstract Machine and the Join-Calculus. In: Proceedings of POPL '96, ACM, pp. 372–385, doi:10.1145/237721.237805.

[5] Daniele Gorla (2008): Comparing Communication Primitives via their Relative Expressive Power. Inf. & Comp.

[6] Daniele Gorla (2010): Towards a Unified Approach to Encodability and Separation Results for Process Calculi. Inf. & Comp.

[7] Kohei Honda: Notes on Soundness of a Mapping from π-calculus to ν-calculus. Manuscript; with comments added in October 1993.

[8] Kohei Honda & Mario Tokoro (1991): An Object Calculus for Asynchronous Communication. In: ECOOP '91, LNCS.

[9] Uwe Nestmann (1998): On the Expressive Power of Joint Input. In Catuscia Palamidessi & Ilaria Castellani, editors: Proceedings of EXPRESS '98, ENTCS.

[10] Uwe Nestmann (2000): What is a "Good" Encoding of Guarded Choice? Inf. & Comp.

[11] Catuscia Palamidessi (2003): Comparing the Expressive Power of the Synchronous and the Asynchronous π-calculi. MSCS.

[12] Joachim Parrow (2008): Expressiveness of Process Algebras. ENTCS.

[13] Kirstin Peters & Uwe Nestmann (2010): Breaking Symmetries. In: EXPRESS'10, EPTCS 41, pp. 136–150, doi:10.4204/EPTCS.41.10.

[14] Kirstin Peters & Uwe Nestmann (2011): Breaking Symmetries. Submitted to MSCS.

[15] Corrado Priami (1996): Enhanced Operational Semantics for Concurrency. Ph.D. thesis, Università di Pisa-Genova-Udine.

[16] Davide Sangiorgi & David Walker (2001): The π-calculus: A Theory of Mobile Processes. Cambridge University Press.