Global types and event structure semantics for asynchronous multiparty sessions
Ilaria Castellani, Mariangiola Dezani-Ciancaglini, Paola Giannini
ILARIA CASTELLANI, MARIANGIOLA DEZANI-CIANCAGLINI, AND PAOLA GIANNINI

INRIA, Université Côte d'Azur, Sophia Antipolis, France
e-mail address: [email protected]

Dipartimento di Informatica, Università di Torino, Italy
e-mail address: [email protected]

Dipartimento di Scienze e Innovazione Tecnologica, Università del Piemonte Orientale, Italy
e-mail address: [email protected]

Abstract. We propose an interpretation of multiparty sessions with asynchronous communication as
Flow Event Structures. We introduce a new notion of global type for asynchronous multiparty sessions, ensuring the expected properties for sessions, including progress. Our global types, which reflect asynchrony more directly than standard global types and are more permissive, are themselves interpreted as Prime Event Structures. The main result is that the Event Structure interpretation of a session is equivalent, when the session is typable, to the Event Structure interpretation of its global type.
1. Introduction
Session types describe interactions among a number of participants, which proceed according to a given protocol. They extend classical data types by specifying, in addition to the type of exchanged data, also the interactive behaviour of participants, namely the sequence of their input/output actions towards other participants. The aim of session types is to ensure safety properties for sessions, such as the absence of communication errors (no type mismatch in exchanged data) and deadlock-freedom (no standstill until every participant is terminated). Sometimes, a stronger property is targeted, called progress (no participant waits forever).

Initially conceived for describing binary protocols in the π-calculus [THK94, HVK98], session types have been later extended to multiparty protocols [HYC08, HYC16] and

Key words and phrases: Communication-based Programming, Process Calculi, Event Structures, Multiparty Session Types.
This research has been supported by the ANR17-CE25-0014-01 CISC project. Partially supported by EU H2020-644235 Rephrase project, EU H2020-644298 HyVar project, IC1402 ARVI and Ateneo/CSP project RunVar. This original research has the financial support of the Università del Piemonte Orientale.
Preprint submitted to Logical Methods in Computer Science. © I. Castellani, M. Dezani-Ciancaglini, and P. Giannini. Creative Commons (CC) licence.

embedded into a range of functional, concurrent, and object-oriented programming languages [ABB+]. Multiparty session types are based on a global type that describes the whole session protocol, and local types that describe the contributions of the individual participants to the protocol. The key requirement in order to achieve the expected safety properties is that all local types be obtained as projections from the same global type.

Communication in sessions is always directed from a given sender to a given receiver. It can be synchronous or asynchronous. In the first case, sender and receiver need to synchronise in order to exchange a message. In the second case, messages may be sent at any time, hence a sender is never blocked. The sent messages are stored in a queue, where they may be fetched by the intended receiver. Asynchronous communication is often favoured for multiparty sessions, since such sessions may be used to model web services or distributed applications, where the participants are spread over different sites.

Session types have been shown to bear a strong connection with models of concurrency such as communicating automata [DY12], as well as with message-sequence charts [HYC16], graphical choreographies [LTY15, TG18], and various brands of linear logics [CP10, TCP11, Wad14, PCPT14, CPT16].

In a companion paper [CDG19a], we investigated the relationship between synchronous multiparty sessions and Event Structures (ESs) [Win88], a well-known model of concurrency which is grounded on the notions of causality and conflict between events. We considered a simple calculus, where sessions are described as networks of sequential processes [DCGJ+], and we proposed an interpretation of such networks as Flow Event Structures (FESs) [BC88, BC94], as well as an interpretation of global types as
Prime Event Structures (PESs) [Win80, NPW81]. We showed that for typed sessions these two interpretations agree, in the sense that they yield isomorphic domains of configurations.

In the present paper, we undertake a similar endeavour in the asynchronous setting. This involves devising a new notion of global type for asynchronous networks. We start by considering a core session calculus as in the synchronous case, where processes are only able to exchange labels, not values, hence local types coincide with processes and global types may be directly projected to processes. Moreover, networks are now endowed with a queue and they act on this queue by performing outputs or inputs: an output stores a message in the queue, while an input fetches a message from the queue.

To illustrate the difference between synchronous and asynchronous sessions and motivate the introduction of new global types for the latter, let us discuss a simple example. Consider the network:

N = p[[q!λ; q?λ′]] ‖ q[[p!λ′; p?λ]]

where each of the participants p and q wishes to first send a message to the other one and then receive a message from the other one.

In a synchronous setting this network is stuck, because a network communication arises from the synchronisation of an output with a matching input, and here the output q!λ of p cannot synchronise with the input p?λ of q, since the latter is guarded by the output p!λ′. Similarly, the output p!λ′ of q cannot synchronise with the input q?λ′ of p. Indeed, this network is not typable, because any global type for it should have one of the two forms:

G1 = p → q : λ; q → p : λ′    G2 = q → p : λ′; p → q : λ

However, neither of the Gi projects down to the correct processes for both p and q in N. For instance, G1 projects to the correct process q!λ; q?λ′ for p, but its projection on q is p?λ; p!λ′, which is not the correct process for q.

In an asynchronous setting, on the other hand, this network is run in parallel with a queue M, which we indicate by N ‖ M, and it can always move for whatever choice of M. Indeed, the moves of an asynchronous network are no more complete communications but rather "communication halves", namely outputs or inputs. For instance, if the queue is empty, then N ‖ ∅ can move by first performing the two outputs in any order, and then the two inputs in any order. If instead the queue contains a message from p to q with label λ, followed by a message from q to p with label λ, which we indicate by M = ⟨p, λ, q⟩ · ⟨q, λ, p⟩, then the network will be stuck after performing the two outputs, since the two messages on top of the queue will not be those expected by p and q. Hence we look for a notion of global type that accepts the network N ‖ ∅ but rejects the network N ‖ ⟨p, λ, q⟩ · ⟨q, λ, p⟩.

The idea for our new asynchronous global types is quite simple: to split communications into outputs and inputs, and to add a queue to the type, thus mimicking very closely the behaviour of asynchronous networks. Hence, our global types have the form G ‖ M. Clearly, we must impose some well-formedness conditions on such global types, taking into account also the content of the queue. Essentially, this amounts to requiring that each input appearing in the type be justified by a preceding output or by a message in the queue, and vice versa, that each output or message in the queue be matched by a corresponding input.

With our new global types, it becomes now possible to type the network N ‖ ∅ with the asynchronous global type G ‖ ∅, where G = pq!λ; qp!λ′; pq?λ; qp?λ′, or with the other global types obtained from it by swapping the outputs or the inputs.
Instead, the network N ‖ ⟨p, λ, q⟩ · ⟨q, λ, p⟩ will be rejected, because the global type G ‖ ⟨p, λ, q⟩ · ⟨q, λ, p⟩ is not well formed, since its two inputs do not match the first two messages in the queue.

A different solution was proposed in [MYH09] by means of an asynchronous subtyping relation on local types which allows outputs to be anticipated. In our setting this boils down to a subtyping relation on processes yielding both q!λ; q?λ′ ≤ q?λ′; q!λ and p!λ′; p?λ ≤ p?λ; p!λ′. With the help of this subtyping, both G1 and G2 become types for the network N ‖ ∅ above. Unfortunately, however, this subtyping turned out to be undecidable [BCZ17, LY17].

To define our interpretations of asynchronous networks and asynchronous global types into Flow and Prime Event Structures, respectively, we follow the same schema as for their synchronous counterparts in our previous work [CDG19a]. In particular, the events of the ESs are defined syntactically and they record their "history". More specifically, the events of the FES associated with a network record their local history (namely the past actions of the involved participants), while the events of the PES associated with a global type record the global history of the computation (the whole sequence of past communications), and thus they must be quotiented by a permutation equivalence. However, while in [CDG19a] an event represented a communication between two participants, here it represents an output or an input pertaining to a single participant. As a consequence, some care must be taken in defining the causality relation, and in particular the "cross-causality" between an output and the matching input, which are events pertaining to different participants. A further complication is due to the presence of the queue, which appears inside the events themselves.
Therefore, our ES semantics for the asynchronous setting is far from being a trivial adaptation of that given in [CDG19a] for the synchronous setting. To our knowledge, this is also the first time Event Structures are used to give semantics to an asynchronous calculus. We should mention, however, that a semantics for finite asynchronous choreographies by means of PESs was recently proposed in [LMT20].

To sum up, the contribution of this paper is twofold:

1) We propose an original syntax for asynchronous global types, which, in our view, models asynchronous communication in a more natural way than existing approaches. In the literature, typing rules for asynchronous multiparty session types use the standard syntax of global types, as introduced in [HYC16]. Typability of asynchronous networks is enhanced by the subtyping first proposed in [MYH09], which, however, was later shown to be undecidable [BCZ17, LY17]. Our type system is more permissive than the standard one [HYC16] – in particular, since it allows outputs to take precedence over inputs as in [MYH09], a characteristic of asynchronous communication – but it remains decidable. We show that our new global types ensure classical safety properties as well as progress.

2) We present an Event Structure semantics for asynchronous networks and for our new asynchronous global types. Networks are interpreted as FESs and asynchronous global types are interpreted as PESs. Our main result here is an isomorphism between the configuration domains of the FES of a typed network and the PES of its global type.

The paper is organised as follows. Section 2 introduces our calculus for asynchronous multiparty sessions. In Section 3 we recap from previous work the necessary material about Event Structures. In Section 4 we recall our interpretation of processes as PESs, taken from our companion paper [CDG19a]. In Section 5 we present our interpretation of asynchronous networks as FESs.
Section 6 introduces our new global types and the associated type system, establishing its main properties. In Section 7 we define our interpretation of asynchronous global types as PESs. Finally, in Section 8 we prove the equivalence between the FES semantics of a network and the PES semantics of its global type. We conclude with a discussion on related work and future research directions in Section 9.

2. A Core Calculus for Multiparty Sessions

We now formally introduce our calculus, where multiparty sessions are represented as networks of processes. The operational semantics is based on asynchronous communication, where message emission is non-blocking and sent messages are stored in a queue while waiting to be read by their receiver.

We assume the following base sets: participants, ranged over by p, q, r and forming the set Part, and labels, ranged over by λ, λ′, ... and forming the set Lab.

Let π ∈ {p!λ, p?λ | p ∈ Part, λ ∈ Lab} denote an atomic action. The action p!λ represents an output of label λ to participant p, while the action p?λ represents an input of label λ from participant p.

Definition 2.1 (Processes and their tree representations). Processes are defined in three steps:
(1) Pre-processes are coinductively defined by:

P ::=coind ⊕_{i∈I} p!λi; Pi | Σ_{i∈I} p?λi; Pi | 0

where I is non-empty and λj ≠ λh for all j, h ∈ I with j ≠ h, i.e. labels in choices are all different.
(2) The tree representation of a pre-process is a directed rooted tree, where: (a) each internal node is decorated by p! or p? and has as many children as the number of branches of the choice, (b) the edge from p! or p? to the child Pi is decorated by λi, and (c) the leaves of the tree (if any) are decorated by 0.
(3) A process is a pre-process whose tree representation is regular (namely, it has finitely many distinct sub-trees).

Processes of the shape ⊕_{i∈I} p!λi; Pi and Σ_{i∈I} p?λi; Pi are called output and input processes, respectively.
In the following, we will omit trailing 0's when writing processes. We identify processes with their tree representations and we shall sometimes refer to the trees as the processes themselves. The regularity condition implies that we only consider processes admitting a finite description. Internal choice (⊕) and external choice (Σ) are assumed to be associative and commutative, so the order of children of a node does not matter.

Observe that, because of the condition λj ≠ λh in choices, each path starting from the root in the tree representation of a process is uniquely identified by the sequence of decorations along its nodes and edges.

Note that identifying processes with their tree representations is equivalent to writing processes with the µ-notation and considering them up to an equality which allows for an infinite number of unfoldings. This is also called the equirecursive approach, since it views processes as the unique solutions of recursive equations [Pie02] (Section 20.2). The existence and uniqueness of a solution follow from known results (see [Cou83] and also Theorem 7.5.34 of [CC13]). When writing processes, we shall use (mutually) recursive equations.

In a full-fledged calculus, labels would carry values, namely the exchanges between participants would be of the form λ(v). For simplicity, we consider only labels here. This will allow us to project global types directly to processes, without having to explicitly introduce local types, see Section 6.

In our calculus, sent labels are stored in a queue together with sender and receiver names, from where they are subsequently fetched by the receiver. We define messages to be triples ⟨p, λ, q⟩ and message queues (or simply queues) to be possibly empty sequences of messages:

M ::= ∅ | ⟨p, λ, q⟩ · M

The order of messages in the queue is the order in which they will be read.
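To fix intuitions, here is a minimal executable sketch (ours, not part of the paper) of queues as lists of ⟨sender, label, receiver⟩ triples, together with non-blocking send and receive moves in the style of the asynchronous semantics; it replays the network N = p[[q!λ; q?λ′]] ‖ q[[p!λ′; p?λ]] from the introduction. All encodings and names are our own assumptions; `l` and `l2` play the roles of λ and λ′.

```python
# Hypothetical encoding (not from the paper): a process is either
# ("out", (q, lam), cont), ("in", (p, lam), cont), or END; the queue is a
# list of (sender, label, receiver) triples, read per channel in FIFO order.
END = ("end", None, None)

def send(net, queue, p, q, lam):
    """p emits lam towards q: non-blocking, the message joins the queue."""
    kind, target, cont = net[p]
    assert kind == "out" and target == (q, lam)
    return {**net, p: cont}, queue + [(p, lam, q)]

def recv(net, queue, p, q, lam):
    """q reads the first message sent to it by p from the queue."""
    kind, source, cont = net[q]
    assert kind == "in" and source == (p, lam)
    i = next(i for i, (s, _, r) in enumerate(queue) if (s, r) == (p, q))
    assert queue[i][1] == lam, "first message on channel pq has wrong label"
    return {**net, q: cont}, queue[:i] + queue[i + 1:]

# N = p[[q!l; q?l2]] || q[[p!l2; p?l]]
net = {"p": ("out", ("q", "l"), ("in", ("q", "l2"), END)),
       "q": ("out", ("p", "l2"), ("in", ("p", "l"), END))}
queue = []
net, queue = send(net, queue, "p", "q", "l")    # pq!l
net, queue = send(net, queue, "q", "p", "l2")   # qp!l2
net, queue = recv(net, queue, "q", "p", "l2")   # qp?l2 (p reads)
net, queue = recv(net, queue, "p", "q", "l")    # pq?l  (q reads)
assert net == {"p": END, "q": END} and queue == []
```

Note that `recv` fetches the first message on the given channel rather than the first message of the whole queue: this mimics reading modulo the structural equivalence on queues introduced next.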
Since the only reading order that matters is that between messages with the same sender and the same receiver, we consider message queues modulo the structural equivalence given by:
M · ⟨p, λ, q⟩ · ⟨r, λ′, s⟩ · M′ ≡ M · ⟨r, λ′, s⟩ · ⟨p, λ, q⟩ · M′   if p ≠ r or q ≠ s

The equivalence ≡ says that a global queue may be split into a set of reading queues, one for each participant (in which messages with different senders are not ordered), or even further, into a set of channel queues, one for each unidirectional channel pq or ordered pair (p, q) of participants.

Note in particular that ⟨p, λ, q⟩ · ⟨q, λ′, p⟩ ≡ ⟨q, λ′, p⟩ · ⟨p, λ, q⟩. These two equivalent queues represent a situation in which both participants p and q have sent a label to the other one, and neither of them has read the label sent by the other one. This situation may indeed happen in a network with asynchronous communication. Since the two sends occur in parallel, namely, in any order, the order of the corresponding messages in the queue should be irrelevant. This point will be further illustrated by Example 2.4. We need a queue instead of a multiset, because we want labels between two participants to be read in the same order in which they are sent.

p[[⊕_{i∈I} q!λi; Pi]] ‖ N ‖ M  −pq!λk→  p[[Pk]] ‖ N ‖ M · ⟨p, λk, q⟩   where k ∈ I   [Send]

q[[Σ_{j∈J} p?λj; Qj]] ‖ N ‖ ⟨p, λk, q⟩ · M  −pq?λk→  q[[Qk]] ‖ N ‖ M   where k ∈ J   [Rcv]

Figure 1: LTS for networks.

Networks are comprised of located processes of the form p[[P]] composed in parallel, each with a different participant p, and by a message queue.

Definition 2.2 (Networks). Networks are defined by:

N ‖ M  where  N = p1[[P1]] ‖ ⋯ ‖ pn[[Pn]]  with pi ≠ pj for any i ≠ j, and M is a message queue.

We assume the standard structural congruence on networks, stating that parallel composition is associative and commutative and has neutral element p[[0]] for any fresh p. If P ≠ 0, we write p[[P]] ∈ N as short for N ≡ p[[P]] ‖ N′ for some N′. This abbreviation is justified by the associativity and commutativity of ‖.

To define the operational semantics of networks, we use an LTS whose transitions are decorated by inputs or outputs. Therefore, we define the set of input/output communications (communications for short), ranged over by β, β′, to be {pq!λ, pq?λ | p, q ∈ Part, λ ∈ Lab}, where pq!λ represents the emission of a label λ from participant p to participant q, and pq?λ the actual reading by participant q of the label λ sent by participant p. To memorise this notation, it is helpful to view pq as the channel from p to q and the exclamation/question mark as the mode (write/read) in which the channel is used. The LTS semantics of networks is specified by the two rules [Send] and [Rcv] given in Figure 1. Rule [Send] allows a participant p with an internal choice (a sender) to send to a participant q one of its possible labels λi by adding it to the queue. Symmetrically, Rule [Rcv] allows a participant q with an external choice (a receiver) to read the first label λk sent to her by participant p, provided this label is among the λj's specified in the choice. Thanks to structural equivalence, this message can always be moved to the top of the queue.

A key role in this paper is played by (possibly empty) sequences of communications. As usual we define them as traces.

Definition 2.3 (Traces).
(Finite) traces are defined by:

τ ::= ǫ | β · τ

We use |τ| to denote the length of the trace τ. When τ = β1 · … · βn (n ≥ 1) we write N ‖ M −τ→ N′ ‖ M′ as short for

N ‖ M −β1→ N1 ‖ M1 ⋯ N_{n−1} ‖ M_{n−1} −βn→ N′ ‖ M′

In the following example, we consider the semantics of the network N ‖ ∅ discussed in the introduction.

Example 2.4.
Consider the network N ‖ ∅, where N = p[[q!λ; q?λ′]] ‖ q[[p!λ′; p?λ]]. Then N ‖ ∅ can move by first performing the two sends, in any order, and then the two reads, in any order. A possible execution of N ‖ ∅ is:

N ‖ ∅ −pq!λ→ p[[q?λ′]] ‖ q[[p!λ′; p?λ]] ‖ ⟨p, λ, q⟩
      −qp!λ′→ p[[q?λ′]] ‖ q[[p?λ]] ‖ ⟨p, λ, q⟩ · ⟨q, λ′, p⟩
      ≡ p[[q?λ′]] ‖ q[[p?λ]] ‖ ⟨q, λ′, p⟩ · ⟨p, λ, q⟩
      −qp?λ′→ p[[0]] ‖ q[[p?λ]] ‖ ⟨p, λ, q⟩
      −pq?λ→ p[[0]] ‖ q[[0]] ‖ ∅

Note the use of ≡, allowing label λ′ to be read by p before label λ is read by q.

We now introduce the notion of player, which will be extensively used in the rest of the paper. A player of a communication β is a participant who is active in β.

Definition 2.5 (Players of communications). We denote by play(β) the set of players of communication β, defined by:

play(pq!λ) = {p}    play(pq?λ) = {q}

The function play is extended to traces in the obvious way:

play(ǫ) = ∅    play(β · τ) = play(β) ∪ play(τ)

In Section 6 we will use the same notation for the players of a global type. In all cases, the context should make it easy to understand which function is in use.

3. Event Structures

We recall now the definitions of
Prime Event Structure (PES) from [NPW81] and Flow Event Structure (FES) from [BC88]. The class of FESs is more general than that of PESs: for a precise comparison of various classes of event structures, we refer the reader to [BC91]. As we shall see in Sections 4 and 5, while PESs are sufficient to interpret processes, the generality of FESs is needed to interpret networks.

Definition 3.1 (Prime Event Structure). A prime event structure (PES) is a tuple S = (E, ≤, #) where:
(1) E is a denumerable set of events;
(2) ≤ ⊆ (E × E) is a partial order relation, called the causality relation;
(3) # ⊆ (E × E) is an irreflexive symmetric relation, called the conflict relation, satisfying the property: ∀e, e′, e″ ∈ E : e # e′ ≤ e″ ⇒ e # e″ (conflict hereditariness).

We say that two events are concurrent if they are neither causally related nor in conflict.
Definition 3.2 (Flow Event Structure). A flow event structure (FES) is a tuple S = (E, ≺, #) where:
(1) E is a denumerable set of events;
(2) ≺ ⊆ (E × E) is an irreflexive relation, called the flow relation;
(3) # ⊆ (E × E) is a symmetric relation, called the conflict relation.

Note that the flow relation is not required to be transitive, nor acyclic (its reflexive and transitive closure is just a preorder, not necessarily a partial order). Intuitively, the flow relation represents a possible direct causality between two events. Observe also that in a FES the conflict relation is not required to be irreflexive nor hereditary; indeed, FESs may exhibit self-conflicting events, as well as disjunctive causality (an event may have conflicting causes). Any PES S = (E, ≤, #) may be viewed as a FES, with the flow relation given by < (the strict ordering) or by the covering relation of ≤.

We now recall the definition of configuration for event structures. Intuitively, a configuration is a set of events having occurred at some stage of the computation. Thus, the semantics of an event structure S is given by its poset of configurations ordered by set inclusion, where X1 ⊂ X2 means that S may evolve from X1 to X2.

Definition 3.3 (PES Configuration). Let S = (E, ≤, #) be a prime event structure. A configuration of S is a finite subset X of E such that:
(1) X is downward-closed: e′ ≤ e ∈ X ⇒ e′ ∈ X;
(2) X is conflict-free: ∀e, e′ ∈ X, ¬(e # e′).

The definition of configuration for FESs is slightly more elaborate. For a subset X of E, let ≺_X be the restriction of the flow relation to X and ≺*_X be its transitive and reflexive closure.

Definition 3.4 (FES Configuration). Let S = (E, ≺, #) be a flow event structure. A configuration of S is a finite subset X of E such that:
(1) X is downward-closed up to conflicts: e′ ≺ e ∈ X, e′ ∉ X ⇒ ∃e″ ∈ X. e′ # e″ ≺ e;
(2) X is conflict-free: ∀e, e′ ∈ X, ¬(e # e′);
(3) X has no causality cycles: the relation ≺*_X is a partial order.
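On finite structures both notions of configuration are directly checkable. The following sketch is our own encoding (relations given as sets of pairs, names ours); it implements the conditions of Definitions 3.3 and 3.4, testing acyclicity by repeatedly removing flow-sources.

```python
def is_pes_config(X, E, leq, conf):
    """Definition 3.3: X is downward-closed w.r.t. leq and conflict-free."""
    X = set(X)
    closed = all(ep in X for e in X for ep in E if (ep, e) in leq)
    free = all((e, ep) not in conf for e in X for ep in X)
    return closed and free

def is_fes_config(X, E, flow, conf):
    """Definition 3.4: conflict-free, downward-closed up to conflicts,
    and with no causality cycles inside X."""
    X = set(X)
    if any((e, ep) in conf for e in X for ep in X):
        return False
    # (1): a missing flow-cause must be superseded by a conflicting cause in X
    if not all(any((ep, epp) in conf and (epp, e) in flow for epp in X)
               for e in X for ep in E if (ep, e) in flow and ep not in X):
        return False
    # (3): the flow relation restricted to X must be acyclic
    remaining = set(X)
    edges = {(a, b) for (a, b) in flow if a in X and b in X}
    while remaining:
        sources = {n for n in remaining
                   if not any((m, n) in edges for m in remaining)}
        if not sources:
            return False          # a cycle survives
        remaining -= sources
    return True

# PES: event 1 below event 2, conflict between 2 and 3
E, leq, conf = {1, 2, 3}, {(1, 2)}, {(2, 3), (3, 2)}
assert is_pes_config({1, 2}, E, leq, conf)
assert not is_pes_config({2}, E, leq, conf)          # not downward-closed
assert not is_pes_config({1, 2, 3}, E, leq, conf)    # 2 # 3
# FES with disjunctive causality: 1 ≺ 3, 2 ≺ 3, 1 # 2
flow, fconf = {(1, 3), (2, 3)}, {(1, 2), (2, 1)}
assert is_fes_config({1, 3}, E, flow, fconf)   # cause 2 superseded by 1 # 2
assert not is_fes_config({3}, E, flow, fconf)  # no complete set of causes
```

The FES example exhibits the disjunctive causality mentioned above: event 3 has the two conflicting causes 1 and 2, and either one alone completes a configuration.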
Condition (2) is the same as for prime event structures. Condition (1) is adapted to account for the more general – non-hereditary – conflict relation. It states that any event appears in a configuration with a "complete set of causes". Condition (3) ensures that any event in a configuration is actually reachable at some stage of the computation.

If S is a prime or flow event structure, we denote by C(S) its set of configurations. Then, the domain of configurations of S is defined as follows:

Definition 3.5 (ES Configuration Domain). Let S be a prime or flow event structure with set of configurations C(S). The domain of configurations of S is the partially ordered set D(S) =def (C(S), ⊆).

We recall from [BC91] a useful characterisation for configurations of FESs, which is based on the notion of proving sequence, defined as follows:
Definition 3.6 (Proving Sequences). Given a flow event structure S = (E, ≺, #), a proving sequence in S is a sequence e1; ⋯; en of distinct non-conflicting events (i.e. i ≠ j ⇒ ei ≠ ej and ¬(ei # ej) for all i, j) satisfying:

∀i ≤ n ∀e ∈ E : e ≺ ei ⇒ ∃k < i. either e = ek or e # ek ≺ ei

Note that any prefix of a proving sequence is itself a proving sequence. We have the following characterisation of configurations of FESs in terms of proving sequences.
Proposition 3.7 (Representation of configurations as proving sequences [BC91]). Given a flow event structure S = (E, ≺, #), a subset X of E is a configuration of S if and only if it can be enumerated as a proving sequence e1; ⋯; en.

Since PESs may be viewed as particular FESs, we may use Definition 3.6 and Proposition 3.7 both for the FESs associated with networks (see Section 5) and for the PESs associated with asynchronous global types (see Section 7). Note that for a PES the condition of Definition 3.6 simplifies to:

∀i ≤ n ∀e ∈ E : e < ei ⇒ ∃k < i. e = ek

To conclude this section, we recall from [CZ97] the definition of downward surjectivity (or downward-onto, as it was called there), a property that is required for partial functions between two FESs in order to ensure that they preserve configurations. We will make use of this property in Section 5.
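The proving-sequence condition of Definition 3.6 is likewise decidable on finite structures. A sketch in our own encoding (relations as sets of pairs):

```python
def is_proving_sequence(seq, E, flow, conf):
    """Definition 3.6: distinct, pairwise non-conflicting events such that
    every flow-cause of e_i either occurs earlier in the sequence or is in
    conflict with an earlier flow-cause of e_i."""
    if len(set(seq)) != len(seq):
        return False
    if any((a, b) in conf for a in seq for b in seq):
        return False
    return all(any(e == ek or ((e, ek) in conf and (ek, ei) in flow)
                   for ek in seq[:i])
               for i, ei in enumerate(seq)
               for e in E if (e, ei) in flow)

# FES with disjunctive causality: 1 ≺ 3, 2 ≺ 3, 1 # 2
E, flow, conf = {1, 2, 3}, {(1, 3), (2, 3)}, {(1, 2), (2, 1)}
assert is_proving_sequence([1, 3], E, flow, conf)   # cause 2 superseded by 1
assert not is_proving_sequence([3], E, flow, conf)  # 3 has no cause yet
assert not is_proving_sequence([1, 2], E, flow, conf)  # conflicting events
# any prefix of a proving sequence is itself a proving sequence
assert is_proving_sequence([1], E, flow, conf)
```

Consistently with Proposition 3.7, the sequences accepted here enumerate exactly the configurations of the example FES.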
Definition 3.8 (Downward surjectivity). Let S_i = (E_i, ≺_i, #_i) be a flow event structure, for i = 1, 2, and let e_i, e′_i range over E_i. A partial function f : E1 → E2 is downward surjective if it satisfies the condition:

e2 ≺2 f(e1) ⇒ ∃e′1 ∈ E1. e2 = f(e′1)

4. Event Structure Semantics of Processes

In this section, we present an ES semantics for processes, and show that the obtained ESs are PESs. This semantics, which is borrowed from our companion paper [CDG19a], will be the basis for defining the ES semantics for networks in Section 5.

We start by introducing process events, which are non-empty sequences of atomic actions π as defined at the beginning of Section 2.

Definition 4.1 (Process event). Process events (p-events for short) η, η′ are defined by:

η ::= π | π · η

We denote by PE the set of p-events.

Let ζ denote a (possibly empty) sequence of actions, and ⊑ denote the prefix ordering on such sequences. Each p-event η may be written either in the form η = π · ζ or in the form η = ζ · π. We shall feel free to use any of these forms. When a p-event is written as η = ζ · π, then ζ may be viewed as the causal history of η, namely the sequence of actions that must have been executed by the process for η to be able to happen.

We define the action of a p-event to be its last atomic action: act(ζ · π) = π. A p-event η is an output p-event if act(η) is an output and an input p-event if act(η) is an input.

Definition 4.2 (Causality and conflict relations on p-events). The causality relation ≤ and the conflict relation # on the set of p-events PE are defined by:
(1) η ⊑ η′ ⇒ η ≤ η′;
(2) π ≠ π′ ⇒ ζ · π · ζ′ # ζ · π′ · ζ″.

Definition 4.3 (Event Structure of a process). The event structure of process
P is the triple S_P(P) = (PE(P), ≤_P, #_P) where:
(1) PE(P) ⊆ PE is the set of sequences of decorations along the nodes and edges of a path from the root to an edge in the tree of P;
(2) ≤_P is the restriction of ≤ to the set PE(P);
(3) #_P is the restriction of # to the set PE(P).

In the following we shall feel free to drop the subscript in ≤_P and #_P. Note that the set PE(P) may be denumerable, as shown by the following example.

Example 4.4.
If P = q!λ; P ⊕ q!λ′, then

PE(P) = {q!λ · … · q!λ (n times) | n ≥ 1} ∪ {q!λ · … · q!λ (n times) · q!λ′ | n ≥ 0}

We conclude this section by showing that the ESs of processes are PESs.
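As a quick executable check of Definition 4.2 (in our own encoding, which is not the paper's: p-events as tuples of atomic actions), causality is the prefix order and conflict is divergence, and conflict hereditariness can be observed directly:

```python
def leq(eta1, eta2):
    """Definition 4.2(1): causality on p-events is the prefix ordering."""
    return eta2[:len(eta1)] == eta1

def conflict(eta1, eta2):
    """Definition 4.2(2): two p-events conflict iff they share a prefix and
    then diverge, i.e. neither is a prefix of the other."""
    return not leq(eta1, eta2) and not leq(eta2, eta1)

out = lambda p, lam: ("!", p, lam)   # the atomic action p!lam
inp = lambda p, lam: ("?", p, lam)   # the atomic action p?lam

a = (out("q", "l"),)                          # q!l
b = (out("q", "l"), inp("q", "l2"))           # q!l · q?l2
c = (out("q", "l3"),)                         # q!l3
assert leq(a, b)                     # q!l  <=  q!l · q?l2
assert conflict(b, c)                # the two events diverge immediately
# conflict hereditariness (cf. Proposition 4.5): c # a and a <= b give c # b
assert conflict(c, a) and leq(a, b) and conflict(c, b)
```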
Proposition 4.5. Let P be a process. Then S_P(P) is a prime event structure with an empty concurrency relation.

Proof. The properties of ≤ follow from the corresponding properties of the prefix ordering ⊑. As for the conflict relation #, irreflexivity and symmetry are immediate from Clause (2) of Definition 4.2; we are left to show conflict hereditariness. Suppose η # η′ ≤ η″. From Clause (2) of Definition 4.2 there are π, π′, ζ, ζ′ and ζ″ such that π ≠ π′ and η = ζ · π · ζ′ and η′ = ζ · π′ · ζ″. From η′ ≤ η″ we derive that η″ = ζ · π′ · ζ″ · ζ1 for some ζ1. Therefore, again from Clause (2), we obtain η # η″.

5. Event Structure Semantics of Networks

We present now the ES semantics of networks. In the ES of a network, asynchronous communication will be modelled by two causally related events, the first representing an asynchronous output, namely the enqueuing of a message in the queue, and the second representing an asynchronous input, namely the dequeuing of a message from the queue.

We start by defining the o-trace of a queue M, notation otr(M), which is the sequence of output communications in the queue. We use ω to range over o-traces.

Definition 5.1.
The o-trace corresponding to a queue is defined by:

otr(∅) = ǫ    otr(⟨p, λ, q⟩ · M) = pq!λ · otr(M)

O-traces are considered modulo the following equivalence ≈, which mimics the structural equivalence on queues.

Definition 5.2 (o-trace equivalence ≈). The equivalence ≈ on o-traces is the least equivalence such that:

ω · pq!λ · rs!λ′ · ω′ ≈ ω · rs!λ′ · pq!λ · ω′   if p ≠ r or q ≠ s

Network events are relative to a given participant p. They are pairs made of an o-trace ω modulo ≈ (notation ω_≈), representing a queue modulo ≡, and of a p-event η.

Definition 5.3 (Network events).
(1) Network events ρ, ρ′, also called n-events, are pairs (ω_≈, η) located at some participant p, written p :: (ω_≈, η). We call ω_≈ the queue and η the p-event of ρ, respectively.
(2) We define

i/o(ρ) = pq!λ if ρ = p :: (ω_≈, ζ · q!λ)    i/o(ρ) = pq?λ if ρ = q :: (ω_≈, ζ · p?λ)

and we say that ρ is an output n-event representing the communication pq!λ, or an input n-event representing the communication pq?λ, respectively.
(3) We denote by NE the set of n-events.

In order to define the flow relation between an output n-event p :: (ω_≈, ζ · q!λ) and the matching input n-event q :: (ω_≈, ζ′ · p?λ), we introduce a duality relation on projections of action sequences, see Definition 5.5. We first define the projection of traces on participants, producing action sequences (Definition 5.4(1)), and then the projection of action sequences on participants, producing sequences of undirected actions of the form !λ and ?λ (Definition 5.4(2)).

In the sequel, we will use the symbol † to stand for either ! or ?. Then p†λ will stand for either p!λ or p?λ. Similarly, †λ will stand for either !λ or ?λ.

Definition 5.4 (Projections).
(1) The projection of a trace on a participant is defined by:

ǫ@r = ǫ    (β · τ)@r = q!λ · τ@r if β = rq!λ    (β · τ)@r = p?λ · τ@r if β = pr?λ    (β · τ)@r = τ@r otherwise

(2) The projection of an action sequence on a participant is defined by:

ǫ↾r = ǫ    (π · ζ)↾r = †λ · ζ↾r if π = r†λ    (π · ζ)↾r = ζ↾r otherwise

We use χ to range over sequences of output actions and ϑ to range over sequences of undirected actions.

We now introduce a partial order relation ⊑ on sequences of undirected actions, which reflects the fact that in an asynchronous semantics it is better to anticipate outputs, as first observed in [MYH09]. This relation, as well as the standard duality relation ⋈ on projections, will be used to define our specific duality relation ⊑⋈⊒ on projections of action sequences.

Definition 5.5 (Partial order and duality relations on undirected action sequences). The three relations ⊑, ⋈ and ⊑⋈⊒ on undirected action sequences are defined as follows:
(1) The relation ⊑ on undirected action sequences is defined as the smallest partial order such that:

ϑ · !λ · ?λ′ · ϑ′ ⊑ ϑ · ?λ′ · !λ · ϑ′

(2) The relation ⋈ on undirected action sequences is defined by:

ǫ ⋈ ǫ    ϑ ⋈ ϑ′ ⇒ !λ · ϑ ⋈ ?λ · ϑ′ and ?λ · ϑ ⋈ !λ · ϑ′

(3) The relation ⊑⋈⊒ on undirected action sequences is defined by:

ϑ1 ⊑⋈⊒ ϑ2 if ϑ′1 ⋈ ϑ′2 for some ϑ′1, ϑ′2 such that ϑ1 ⊑ ϑ′1 and ϑ2 ⊑ ϑ′2

For example !λ · !λ · ?λ ⊑ ?λ · !λ · !λ, which implies !λ · !λ · ?λ ⊑⋈⊒ !λ · ?λ · ?λ. We may now define the flow and conflict relations on n-events.

Definition 5.6 (Flow and conflict relations on n-events). The flow relation ≺ and the conflict relation # on the set of n-events NE are defined by:
(1) (a) η < η′ ⇒ p :: (ω_≈, η) ≺ p :: (ω_≈, η′);
    (b) (ω@p · ζ)↾q ⊑⋈⊒ (ω@q · ζ″)↾p and (ζ′ · p?λ)↾p ⊑ (ζ″ · p?λ · χ)↾p for some ζ″ and χ ⇒ p :: (ω_≈, ζ · q!λ) ≺ q :: (ω_≈, ζ′ · p?λ);
(2) η # η′ ⇒ p :: (ω_≈, η) # p :: (ω_≈, η′).
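The order and duality relations of Definition 5.5 are decidable on finite sequences. The sketch below is our own encoding (undirected actions as ("!", λ) or ("?", λ) pairs, all names ours); it checks the example above, where all actions carry the same label.

```python
def up_set(theta):
    """All theta' above theta in the order of Definition 5.5(1): theta' is
    reached by repeatedly delaying an output past an adjacent input."""
    seen, todo = {tuple(theta)}, [tuple(theta)]
    while todo:
        x = todo.pop()
        for i in range(len(x) - 1):
            if x[i][0] == "!" and x[i + 1][0] == "?":
                y = x[:i] + (x[i + 1], x[i]) + x[i + 2:]
                if y not in seen:
                    seen.add(y)
                    todo.append(y)
    return seen

def dual(t1, t2):
    """Definition 5.5(2): same length, equal labels, opposite modes."""
    return len(t1) == len(t2) and all(
        a[1] == b[1] and {a[0], b[0]} == {"!", "?"} for a, b in zip(t1, t2))

def weakly_dual(t1, t2):
    """Definition 5.5(3): some larger variants of t1 and t2 are dual."""
    return any(dual(u1, u2) for u1 in up_set(t1) for u2 in up_set(t2))

O, I = (lambda l: ("!", l)), (lambda l: ("?", l))
t1 = (O("l"), O("l"), I("l"))     # !λ·!λ·?λ
t2 = (O("l"), I("l"), I("l"))     # !λ·?λ·?λ
assert (I("l"), O("l"), O("l")) in up_set(t1)   # !λ·!λ·?λ ⊑ ?λ·!λ·!λ
assert weakly_dual(t1, t2)                      # !λ·!λ·?λ ⊑⋈⊒ !λ·?λ·?λ
assert not dual(t1, t2)                         # but not plainly dual
```

The last two assertions show why the weak duality is needed: the two projections only become dual once the output of the second sequence is allowed to be delayed past the inputs.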
Clause (1a) defines flows within the same "locality" p, which we call local flows, while Clause (1b) defines flows between different localities, which we call cross-flows: these are flows between an output of p towards q and the corresponding input of q from p. The condition in this clause expresses a sort of "weak duality" between the history of the output and the history of the input: the intuition is that if q has some outputs towards p occurring in ζ′, namely before its input p?λ, then when checking for duality these outputs can be moved after p?λ, namely in χ, because q does not need to wait until p has consumed these outputs to perform its input p?λ. This condition can be seen at work in Examples 5.10 and 5.11.

The reason for using the same o-traces when defining the flow and conflict relations is that in the ES of a network they will be images through otr of the same queue.

For example, we have a cross-flow ρ ≺ ρ′ between the following n-events, where ω = pq!λ · pq!λ · qs!λ · qp!λ:

ρ = p :: (ω≍, r?λ · q?λ · q!λ) ≺ q :: (ω≍, p!λ′ · p?λ · p?λ · p?λ) = ρ′

since in this case ζ = r?λ · q?λ and ζ′ = p!λ′ · p?λ · p?λ, and thus, taking ζ″ = p?λ · p?λ and χ = p!λ′, we obtain

(ω@p · ζ)⇂q = !λ · !λ · ?λ ⊑ ?λ · !λ · !λ ⋈ !λ · ?λ · ?λ = (ω@q · ζ″)⇂p

and

(ζ′ · p?λ)⇂p = !λ′ · ?λ · ?λ · ?λ ⊑ ?λ · ?λ · ?λ · !λ′ = (ζ″ · p?λ · χ)⇂p

When ρ = p :: (ω≍, η) ≺ q :: (ω≍, η′) = ρ′ and p ≠ q, then by definition ρ is an output and ρ′ is an input. In this case we say that the output ρ justifies the input ρ′, or symmetrically that the input ρ′ is justified by the output ρ.

An input n-event may also be justified by a message in the queue. This is formalised by the following definition.

Definition 5.7 (Queue-justified n-events). The input n-event ρ = q :: ((ω · pq!λ · ω′)≍, ζ · p?λ) is queue-justified if (ω@p)⇂q ⊑⋈⊒ ζ′⇂p and (ζ · p?λ)⇂p ⊑ (ζ′ · p?λ · χ)⇂p for some ζ′ and χ.

The conditions (ω@p)⇂q ⊑⋈⊒ ζ′⇂p and (ζ · p?λ)⇂p ⊑ (ζ′ · p?λ · χ)⇂p for some ζ′ and χ ensure that the input will consume the shown message in the queue. For example, if ω = pq!λ · pq!λ, then both q :: (ω≍, p!λ′ · p?λ) and q :: (ω≍, p!λ′ · p?λ · p?λ) are queue-justified. On the other hand, if ω = pq!λ then q :: (ω≍, p?λ · p?λ) is not queue-justified.

To define the set of n-events associated with a network, we filter the set of all its potential n-events by keeping only:
– those n-events whose constituent p-events have all their predecessors appearing in some other n-event of the network, and
– those input n-events that are either queue-justified or justified by output n-events of the network.

Given a set E of n-events, we define the narrowing of E (notation nr(E)) as the greatest fixpoint of the function f_E on sets of n-events defined by:

f_E(X) = { ρ ∈ E | ρ = p :: (ω≍, η · π) ⇒ ∃ρ′ ∈ X. ρ′ = p :: (ω≍, η), and
           (ρ is an input n-event ⇒ ρ is either queue-justified or justified by some ρ″ ∈ X) }

Thus, nr(E) is the greatest set X ⊆ E such that X = f_E(X). Note that we could not have taken nr(E) to be the least fixpoint of f_E rather than its greatest fixpoint. Indeed, the least fixpoint of f_E would be the empty set.

We have now enough machinery to define the ES of networks.

Definition 5.8 (Event Structure of a network). The event structure of network N ‖ M is the triple S_N(N ‖ M) = (NE(N ‖ M), ≺_{N‖M}, #_{N‖M}) where:
(1) NE(N ‖ M) = nr({ p :: (ω≍, η) | ω = otr(M) and η ∈ PE(P) with p[[P]] ∈ N });
(2) ≺_{N‖M} is the restriction of ≺ to the set NE(N ‖ M);
(3) #_{N‖M} is the restriction of # to the set NE(N ‖ M).
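Since E is finite for a finite network prefix, the greatest fixpoint of f_E can be computed by iterating f_E downward from E until it stabilises. The following Python sketch is a deliberately simplified rendering (our own encoding: events are participant/action-sequence pairs, and the justification checks are abstracted as predicates passed in by the caller rather than computed from Definitions 5.6 and 5.7). Its test data is the network p[[q?λ; r!λ′]] ‖ r[[p?λ′]] with empty queue of Example 5.9 below, for which narrowing yields the empty set.

```python
def narrowing(events, has_predecessor_in, is_input, queue_justified, justifies):
    """Greatest fixpoint of f_E: repeatedly discard events whose immediate
    p-event predecessor is missing, or inputs with no justifier left."""
    X = set(events)
    while True:
        Y = {rho for rho in X
             if has_predecessor_in(rho, X)
             and (not is_input(rho)
                  or queue_justified(rho)
                  or any(justifies(out, rho) for out in X))}
        if Y == X:
            return X
        X = Y

# Example 5.9: p[[q?λ; r!λ']] ‖ r[[p?λ']] with empty queue.
# Events are (participant, tuple of actions); actions are plain strings.
e1 = ('p', ('q?λ',))
e2 = ('p', ('q?λ', "r!λ'"))
e3 = ('r', ("p?λ'",))

def has_predecessor_in(rho, X):
    part, eta = rho
    return len(eta) == 1 or (part, eta[:-1]) in X

def is_input(rho):
    return '?' in rho[1][-1]          # the last action decides input/output

def queue_justified(rho):             # the queue is empty in this example
    return False

def justifies(out, rho):              # e2's output r!λ' matches e3's input p?λ'
    return out == e2 and rho == e3

assert narrowing({e1, e2, e3}, has_predecessor_in, is_input,
                 queue_justified, justifies) == set()
```

The iteration mirrors the cascade described in Example 5.9: e1 falls first (unjustified input), then e2 (missing predecessor), and finally e3, whose only potential justifier e2 is gone.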
The following example shows how the operation of narrowing prunes the set of potential n-events of a network ES. It also illustrates the interplay between the two conditions in the definition of narrowing.
Example 5.9 .
Consider the network N ‖ ∅, where N = p[[q?λ; r!λ′]] ‖ r[[p?λ′]]. The set of potential n-events of S_N(N ‖ ∅) is

{ p :: (ε, q?λ), p :: (ε, q?λ · r!λ′), r :: (ε, p?λ′) }

The n-event p :: (ε, q?λ) is cancelled, since it is neither queue-justified nor justified by another n-event of the ES. Then p :: (ε, q?λ · r!λ′) is cancelled since it lacks its predecessor p :: (ε, q?λ). Lastly r :: (ε, p?λ′) is cancelled, since it is neither queue-justified nor justified by another n-event of the ES. Notice that p :: (ε, q?λ · r!λ′) would have justified r :: (ε, p?λ′), if it had not been cancelled. We conclude that NE(N ‖ ∅) = ∅.

Example 5.10.
Consider the ES associated with the network N ‖ ∅, with

N = p[[q!λ; q?λ′; q!λ; q?λ′]] ‖ q[[p!λ′; p?λ; p!λ′; p?λ]]

The n-events of S_N(N ‖ ∅) are:

ρ₁ = p :: (ε, q!λ)                         ρ′₁ = q :: (ε, p!λ′)
ρ₂ = p :: (ε, q!λ · q?λ′)                   ρ′₂ = q :: (ε, p!λ′ · p?λ)
ρ₃ = p :: (ε, q!λ · q?λ′ · q!λ)              ρ′₃ = q :: (ε, p!λ′ · p?λ · p!λ′)
ρ₄ = p :: (ε, q!λ · q?λ′ · q!λ · q?λ′)        ρ′₄ = q :: (ε, p!λ′ · p?λ · p!λ′ · p?λ)

The flow relation is given by the cross-flows ρ₁ ≺ ρ′₂, ρ₃ ≺ ρ′₄, ρ′₁ ≺ ρ₂, ρ′₃ ≺ ρ₄, as well as by the local flows ρᵢ ≺ ρⱼ and ρ′ᵢ ≺ ρ′ⱼ for all i, j such that i ∈ {1, 2, 3}, j ∈ {2, 3, 4} and i < j. The conflict relation is empty.

The configurations of S_N(N ‖ ∅) are:

{ρ₁}  {ρ′₁}  {ρ₁, ρ′₁}  {ρ₁, ρ′₁, ρ₂}  {ρ₁, ρ′₁, ρ′₂}  {ρ₁, ρ′₁, ρ₂, ρ′₂}
{ρ₁, ρ′₁, ρ₂, ρ₃}  {ρ₁, ρ′₁, ρ′₂, ρ′₃}  {ρ₁, ρ′₁, ρ₂, ρ′₂, ρ₃}  {ρ₁, ρ′₁, ρ₂, ρ′₂, ρ′₃}  {ρ₁, ρ′₁, ρ₂, ρ′₂, ρ₃, ρ′₃}
{ρ₁, ρ′₁, ρ₂, ρ′₂, ρ₃, ρ′₃, ρ₄}  {ρ₁, ρ′₁, ρ₂, ρ′₂, ρ₃, ρ′₃, ρ′₄}  {ρ₁, ρ′₁, ρ₂, ρ′₂, ρ₃, ρ′₃, ρ₄, ρ′₄}

The network N ‖ ∅ can evolve in two steps to the network:

N′ ‖ M′ = p[[q?λ′; q!λ; q?λ′]] ‖ q[[p?λ; p!λ′; p?λ]] ‖ ⟨p, λ, q⟩ · ⟨q, λ′, p⟩

The n-events of S_N(N′ ‖ M′) are:

ρ₁ = p :: (ω≍, q?λ′)                ρ′₁ = q :: (ω≍, p?λ)
ρ₂ = p :: (ω≍, q?λ′ · q!λ)           ρ′₂ = q :: (ω≍, p?λ · p!λ′)
ρ₃ = p :: (ω≍, q?λ′ · q!λ · q?λ′)     ρ′₃ = q :: (ω≍, p?λ · p!λ′ · p?λ)

where ω = pq!λ · qp!λ′. The flow relation is given by the cross-flows ρ₂ ≺ ρ′₃, ρ′₂ ≺ ρ₃, and by the local flows ρᵢ ≺ ρⱼ and ρ′ᵢ ≺ ρ′ⱼ for all i, j such that i ∈ {1, 2}, j ∈ {2, 3} and i < j. The input n-events ρ₁ and ρ′₁, which are the only ones without causes, are queue-justified. The conflict relation is empty.

The network N′ ‖ M′ can evolve in five steps to the network:

N″ ‖ M″ = q[[p?λ]] ‖ ⟨p, λ, q⟩

The only n-event of S_N(N″ ‖ M″) is q :: (pq!λ, p?λ).

Example 5.11.
Let N = p[[q!λ₁; r!λ ⊕ q!λ₂; r!λ]] ‖ q[[p?λ₁ + p?λ₂]] ‖ r[[p?λ]]. The n-events of S_N(N ‖ ∅) are:

ρ₁ = p :: (ε, q!λ₁)            ρ′₁ = q :: (ε, p?λ₁)
ρ₂ = p :: (ε, q!λ₁ · r!λ)       ρ′₂ = q :: (ε, p?λ₂)
ρ₃ = p :: (ε, q!λ₂)            ρ″ = r :: (ε, p?λ)
ρ₄ = p :: (ε, q!λ₂ · r!λ)

The flow relation is given by the local flows ρ₁ ≺ ρ₂, ρ₃ ≺ ρ₄, and by the cross-flows ρ₁ ≺ ρ′₁, ρ₂ ≺ ρ″, ρ₃ ≺ ρ′₂, ρ₄ ≺ ρ″. The conflict relation is given by ρ₁ # ρ₃, ρ₁ # ρ₄, ρ₂ # ρ₃, ρ₂ # ρ₄ and ρ′₁ # ρ′₂. Notice that ρ₂ and ρ₄ are conflicting causes of ρ″. The configurations are

{ρ₁}  {ρ₁, ρ₂}  {ρ₁, ρ′₁}  {ρ₁, ρ₂, ρ′₁}  {ρ₁, ρ₂, ρ″}  {ρ₁, ρ₂, ρ′₁, ρ″}
{ρ₃}  {ρ₃, ρ₄}  {ρ₃, ρ′₂}  {ρ₃, ρ₄, ρ′₂}  {ρ₃, ρ₄, ρ″}  {ρ₃, ρ₄, ρ′₂, ρ″}

The network N ‖ ∅ can evolve in one step to the network:

N′ ‖ M′ = p[[r!λ]] ‖ q[[p?λ₁ + p?λ₂]] ‖ r[[p?λ]] ‖ ⟨p, λ₁, q⟩

The n-events of S_N(N′ ‖ M′) are ρ = p :: (ω, r!λ), ρ′ = q :: (ω, p?λ₁) and ρ″ = r :: (ω, p?λ), where ω = pq!λ₁. The flow relation is given by the cross-flow ρ ≺ ρ″. Notice that the input n-event ρ′ is queue-justified, and that there is no n-event corresponding to the branch p?λ₂ of q, since such an n-event would not be queue-justified. Hence the conflict relation is empty. The configurations are

{ρ}  {ρ′}  {ρ, ρ′}  {ρ, ρ″}  {ρ, ρ′, ρ″}

It is easy to show that the ESs of networks are FESs.
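The two conditions on configurations used in the proofs below (conflict-freeness and left-closure up to conflicts) can be checked mechanically on the finite flow event structure of Example 5.11. The Python sketch below is our own simplified check, not the full FES configuration definition: it verifies that every flow-cause of an event in the set is either present or "overruled" by a conflicting cause that is present, which is exactly what makes ρ₂ and ρ₄ interchangeable as causes of ρ″.

```python
def is_configuration(X, flow, conflict):
    """Simplified check: X is conflict-free, and every flow-cause of an
    event in X is in X or is excluded by a conflicting cause in X."""
    X = frozenset(X)
    if any((a, b) in conflict for a in X for b in X):
        return False
    for e in X:
        for c in (c for (c, tgt) in flow if tgt == e):
            if c not in X and not any((d, c) in conflict and (d, e) in flow
                                      for d in X):
                return False
    return True

# Events of Example 5.11 (the common empty queue is omitted):
r1, r2, r3, r4, q1, q2, rr = 'ρ1', 'ρ2', 'ρ3', 'ρ4', "ρ'1", "ρ'2", "ρ''"
flow = {(r1, r2), (r3, r4),                      # local flows
        (r1, q1), (r2, rr), (r3, q2), (r4, rr)}  # cross-flows
conflict = {(r1, r3), (r1, r4), (r2, r3), (r2, r4), (q1, q2)}
conflict |= {(b, a) for (a, b) in conflict}      # symmetric closure

# ρ2 and ρ4 are conflicting causes of ρ'': either one alone justifies it.
assert is_configuration({r1, r2, rr}, flow, conflict)
assert is_configuration({r3, r4, q2, rr}, flow, conflict)
assert not is_configuration({r1, r4}, flow, conflict)   # ρ1 # ρ4
assert not is_configuration({rr}, flow, conflict)       # ρ'' lacks its causes
```

The accepted sets coincide with the configurations listed in Example 5.11 for this event structure.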
Proposition 5.12 .
Let N ‖ M be a network. Then S_N(N ‖ M) is a flow event structure.

Proof. The relation ≺ is irreflexive since:
(1) η < η′ implies p :: (ω≍, η) ≠ p :: (ω≍, η′);
(2) p ≠ q implies p :: (ω≍, ζ · q!λ) ≠ q :: (ω≍, ζ′ · p?λ).
Symmetry of the conflict relation between n-events follows from the corresponding property of conflict between p-events.

In the remainder of this section we show that projections of n-event configurations give p-event configurations. We start by formalising the projection function of n-events to p-events and showing that it is downward surjective.

Definition 5.13 (Projection of n-events to p-events). The projection function proj_p(·) is defined by:

proj_p(ρ) = η if ρ = p :: (ω≍, η), and proj_p(ρ) is undefined otherwise.

The projection function proj_p(·) is extended to sets of n-events in the obvious way:

proj_p(X) = { η | ∃ρ ∈ X. proj_p(ρ) = η }

Proposition 5.14 (Downward surjectivity of projections). Let p[[P]] ∈ N, S_N(N ‖ M) = (NE(N ‖ M), ≺, #) and S_P(P) = (PE(P), ≤_P, #_P). Then the partial function proj_p : NE(N ‖ M) → PE(P) is downward surjective.

Proof. Follows immediately from the fact that NE(N ‖ M) is the narrowing of a set of n-events p :: (ω≍, η) with ω = otr(M) and p[[P]] ∈ N and η ∈ PE(P).

The operation of narrowing on network events makes sure that each configuration of the ES of a network projects down to configurations of the ESs of the component processes.

Proposition 5.15 (Projection preserves configurations). Let p[[P]] ∈ N. If X ∈ C(S_N(N ‖ M)), then proj_p(X) ∈ C(S_P(P)).

Proof. Let X ∈ C(S_N(N ‖ M)) and Y = proj_p(X). We want to show that Y ∈ C(S_P(P)), namely that Y satisfies Conditions (1) and (2) of Definition 3.3.
(1) Downward-closure.
Let η ∈ Y. Since Y = proj_p(X), there exists ρ ∈ X such that ρ = p :: (ω≍, η). Suppose η′ < η. From Proposition 5.14 there exists ρ′ ∈ NE(N ‖ M) such that ρ′ = p :: (ω≍, η′). By Definition 5.8(1) we have then ρ′ ≺ ρ. Since X is left-closed up to conflicts, we know that either ρ′ ∈ X or there exists ρ″ ∈ X such that ρ″ # ρ′ and ρ″ ≺ ρ. We examine the two cases in turn:
– ρ′ ∈ X. Then, since η′ = proj_p(ρ′), we have η′ ∈ proj_p(X) = Y and we are done.
– ∃ρ″ ∈ X. ρ″ # ρ′ and ρ″ ≺ ρ. From ρ″ # ρ′ we get ρ″ = p :: (ω≍, η″) and η″ # η′. This implies η″ # η. By Definition 5.8(2) this implies ρ″ # ρ, contradicting the hypothesis that X is conflict-free. So this case is impossible.
(2) Conflict-freeness.
Ad absurdum, suppose there exist η, η′ ∈ Y such that η # η′. Then, since Y = proj_p(X), there must exist ρ, ρ′ ∈ X such that ρ = p :: (ω≍, η) and ρ′ = p :: (ω≍, η′). By Definition 5.8(2) this implies ρ # ρ′, contradicting the hypothesis that X is conflict-free.

6. Asynchronous Global Types

In this section we introduce our new global types for asynchronous communication. The underlying idea is quite simple: to split the communication constructor of standard global types into an output constructor and an input constructor. This will allow us to type networks in which all participants make all their outputs before their inputs, like the network of Example 2.4, whose asynchronous global types will be presented in Example 6.8.
Definition 6.1 (Global types).
(1) Pre-sequential global types (pre-sgts) are defined coinductively by:

G ::=^coind pq!⊞_{i∈I} λᵢ; Gᵢ | pq?λ; G | End

where I is non-empty and λⱼ ≠ λₕ for all j, h ∈ I, j ≠ h, i.e. labels in choices are all different. The pre-sgt pq!⊞_{i∈I} λᵢ; Gᵢ specifies a send from p to q of one of the labels λᵢ, followed by the behaviour specified by Gᵢ. Dually, the pre-sgt pq?λ; G specifies a read by q of the label λ sent by p, followed by the behaviour specified by G.
(2) The tree representation of a pre-sgt is a directed rooted tree, where: (a) each internal node represents either a choice pq!⊞_{i∈I} λᵢ; Gᵢ, in which case it is decorated by pq! and has as many children as there are branches in the choice, or an input pq?λ; G, in which case it is decorated by pq? and has a unique child; (b) the edge from pq! to the child Gᵢ is decorated by λᵢ and the one from pq? to the child G is decorated by λ; and (c) the leaves of the tree (if any) are decorated by End. The tree representation of End has a unique node which is a leaf.
(3) We say that a pre-sgt G is a sequential global type (sgt) if the tree representation of G is regular (namely, it has finitely many distinct sub-trees).
(4) Parallel global types (pgts) are defined by:

G ::=^ind G | G ‖ G

where G is an sgt.
(5) Asynchronous global types (agts) are pairs made of a pgt and a queue, written G ‖ M.
(6) An asynchronous global type is simple if its pgt is an sgt, namely if it has the form G ‖ M.

G↾r = 0 if r ∉ play(G)

(pq!⊞_{i∈I} λᵢ; Gᵢ)↾r =
    ⊕_{i∈I} q!λᵢ; Gᵢ↾p     if r = p
    G₁↾q                   if r = q and |I| = 1
    π⃗; Σ_{i∈I} p?λᵢ; Pᵢ    if r = q and |I| > 1 and Gᵢ↾q = π⃗; p?λᵢ; Pᵢ
    G₁↾r                   if r ∉ {p, q} and r ∈ play(G) and Gᵢ↾r = G₁↾r for all i ∈ I

(pq?λ; G)↾r =
    p?λ; G↾r               if r = q
    G↾r                    if r ≠ q and r ∈ play(G)

(G₁ ‖ G₂)↾r = Gᵢ↾r if r does not occur in Gⱼ for {i, j} = {1, 2}

Figure 2:
Projection of pgts onto participants.

Given an sgt G, the sequences of decorations of nodes and edges on the path from the root to an edge of the tree of G are traces, in the sense of Definition 2.3. We denote by Tr⁺(G) the set of these traces. By definition, Tr⁺(End) = ∅ and each trace in Tr⁺(G) is non-empty.

As may be expected, networks will be typed by agts, see Figure 4. A standard guarantee that should be ensured by agts is that each participant whose behaviour is not terminated can do some action. Moreover, since communications are split into inputs and outputs in the syntax of agts, we must make sure that each input has a matching output in the type, and vice versa. To account for all these requirements we will impose a well-formedness condition on agts.

We start by defining the projections of pgts onto participants (Figure 2). We proceed by defining the depth of participants in sgts (Definition 6.2) and the input/output matching predicate (Figure 3). We can then present the typing rules (Figure 4). For establishing the expected properties of the type system we introduce an LTS for simple agts (Figure 5) and show that well-formedness of simple agts is preserved by transitions (Lemma 6.14).

This section is divided in two subsections, the first focussing on well-formedness and the second presenting the type system and showing that it enjoys the properties of subject reduction and session fidelity and that moreover it ensures progress.

6.1. Well-formed Global Types.
We start by formalising the set of players of pgts, which will be largely used in the definitions and results presented in this section. The set of players of a pgt G, play(G), is the smallest set such that:

play(pq!⊞_{i∈I} λᵢ; Gᵢ) = {p} ∪ ⋃_{i∈I} play(Gᵢ)
play(pq?λ; G) = {q} ∪ play(G)
play(End) = ∅
play(G ‖ G′) = play(G) ∪ play(G′)

The regularity assumption ensures that the set of players of a pgt is finite.

As mentioned earlier, the projection of pgts on participants yields processes. Its coinductive definition is given in Figure 2, where we use π⃗ to denote any sequence, possibly empty, of input/output actions separated by ";" (note the difference with the sequences ζ defined after Definition 4.1, where actions are separated by "·").

The projection of an sgt on a participant which is not a player of the type is the inactive process 0. In particular, the projection of End is 0 on all participants.

The projection of an output choice type on the sender produces an output process sending one of its possible labels to the receiver and then acting according to the projection of the corresponding branch.

The projection of an output choice type on the receiver q has two clauses: one for the case where the choice has a single branch, and one for the case where the choice has more than one branch. In the first case, the projection is simply the projection of the continuation of the single branch on q. In the second case, the projection is defined if the projection of the continuation of each branch on q starts with the same sequence of actions π⃗, followed by an input of the label sent by p on that branch and then by a possibly different process in each branch. In fact, participant q must receive the label chosen by participant p before behaving differently in different branches. The projection on q is then the initial sequence of actions π⃗ followed by an external choice on the different sent labels.
The sequence π⃗ is allowed to contain another input of a (possibly equal) label from p, for example:

(pq!λ₁; pq!λ; pq?λ; pq?λ₁; pq?λ ⊞ pq!λ₂; pq!λ′; pq?λ; pq?λ₂; pq?λ′)↾q = p?λ; (p?λ₁; p?λ + p?λ₂; p?λ′)

In Example 6.15 we will show why we need to distinguish these two cases.

The projection of an output choice type on the other participants is defined only if it produces the same process for all branches of the choice.

The projection of an input type on the receiver is an input action followed by the projection of the rest of the type. For the other participants the projection is simply the projection of the rest of the type.

The projection of a parallel composition of pgts on a participant r is undefined if r occurs in both pgts, and it is equal to 0 if r does not occur in any of them (because in this case we may take Gᵢ to be any of the two pgts, since the projection of both of them on r yields 0).

We need to show that projection is well defined, i.e. that it is a partial function. The proof is easier for pgts which are bounded according to Definition 6.2, see Lemma 6.5.

We discuss now how to ensure that each player will eventually do some communication. We require that the first occurrence of each player of a pgt appears at a bounded depth in all its traces. This condition is sufficient, as shown by the proof of progress for typed networks (Theorem 6.20). To formalise it, we define the depth of a player p in an sgt G, depth(G, p), which uses the length function | | of Definition 2.3, the function play given after Definition 2.5 and the new function ord given below.

Definition 6.2 (Depth).
Let the two functions ord(τ, p) and depth(G, p) be defined by:

ord(τ, p) = n if τ = τ₁ · β · τ₂ and |τ₁| = n − 1 and p ∉ play(τ₁) and p ∈ play(β)
ord(τ, p) = 0 otherwise

depth(G, p) = sup{ ord(τ, p) | τ ∈ Tr⁺(G) } if p ∈ play(G)
depth(G, p) = 0 otherwise

We say that a pgt G is bounded if depth(G′, p) is finite for all sub-trees G′ of G and for all p.

To show that G is bounded it is enough to check depth(G, p) for all p ∈ play(G), since for any other p we have depth(G, p) = 0.

Note that the depth of a participant which is a player of G does not necessarily decrease in the subtrees of G. As a matter of fact, this depth can be finite in G but infinite in one of its subtrees, as shown by the following example.

Example 6.3.
Consider G = rq!λ; rq?λ; G′ where

G′ = pq!λ₁; pq?λ₁; rq!λ; rq?λ ⊞ pq!λ₂; pq?λ₂; G′

Then we have:

depth(G, r) = 1    depth(G, p) = 3    depth(G, q) = 2

whereas

depth(G′, r) = ∞    depth(G′, p) = 1    depth(G′, q) = 2

since (pq!λ₂ · pq?λ₂)ⁿ · pq!λ₁ · pq?λ₁ · rq!λ ∈ Tr⁺(G′) for all n ≥ 0 and sup{ 2n + 3 | n ≥ 0 } = ∞.

However, the depth of a participant which is a player of G but not the player of its root communications decreases in the immediate subtrees of G, as stated in the following lemma.

Lemma 6.4.
(1) If G = pq!⊞_{i∈I} λᵢ; Gᵢ and r ∈ play(G) and r ≠ p, then depth(G, r) > depth(Gᵢ, r) for all i ∈ I.
(2) If G = pq?λ; G′ and r ∈ play(G) and r ≠ q, then depth(G, r) > depth(G′, r).

We can now show that the definition of projection given in Figure 2 is sound.
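On any finite set of traces the functions of Definition 6.2 are directly computable: ord is the 1-based position of the first communication played by p, and depth is the supremum over the traces. The Python sketch below (our own encoding of communications as tuples, with the player of pq!λ being p and the player of pq?λ being q) checks the finite depths of Example 6.3 on a trace prefix of G.

```python
def ord_(trace, p, play):
    """1-based position of the first communication whose player is p;
    0 if p never plays in the trace (Definition 6.2)."""
    for n, beta in enumerate(trace, start=1):
        if p in play(beta):
            return n
    return 0

def depth(traces, p, play):
    """sup of ord over a (finite, here) set of traces."""
    return max((ord_(t, p, play) for t in traces), default=0)

# Communications as (kind, sender, receiver, label);
# the player of pq!λ is p, the player of pq?λ is q.
def play(beta):
    kind, p, q, _ = beta
    return {p} if kind == '!' else {q}

# A trace prefix of Example 6.3: G = rq!λ; rq?λ; G' with G' starting pq!λ1; ...
trace = [('!', 'r', 'q', 'λ'), ('?', 'r', 'q', 'λ'), ('!', 'p', 'q', 'λ1')]
assert ord_(trace, 'r', play) == 1   # depth(G, r) = 1
assert ord_(trace, 'q', play) == 2   # depth(G, q) = 2
assert ord_(trace, 'p', play) == 3   # depth(G, p) = 3
```

The unbounded case depth(G′, r) = ∞ of Example 6.3 is of course not reachable this way: it needs the whole infinite family of traces (pq!λ₂ · pq?λ₂)ⁿ · …, which is exactly why boundedness is a non-trivial condition.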
Lemma 6.5. If G is bounded, then G↾r is a partial function for all r.

Proof. It is enough to consider sgts. We redefine the projection ↓r as the largest relation between sgts and processes such that (G, P) ∈ ↓r implies:
i) if r ∉ play(G), then P = 0;
ii) if G = rq!⊞_{i∈I} λᵢ; Gᵢ, then P = ⊕_{i∈I} q!λᵢ; Pᵢ and (Gᵢ, Pᵢ) ∈ ↓r for all i ∈ I;
iii) if G = pr!λ; G′, then (G′, P) ∈ ↓r;
iv) if G = pr!⊞_{i∈I} λᵢ; Gᵢ and |I| > 1, then P = π⃗; Σ_{i∈I} p?λᵢ; Pᵢ and (Gᵢ, π⃗; p?λᵢ; Pᵢ) ∈ ↓r for all i ∈ I;
v) if G = pq!⊞_{i∈I} λᵢ; Gᵢ and r ∉ {p, q} and r ∈ play(Gᵢ), then (Gᵢ, P) ∈ ↓r for all i ∈ I;
vi) if G = pr?λ; G′, then P = p?λ; P′ and (G′, P′) ∈ ↓r;
vii) if G = pq?λ; G′ and r ≠ q and r ∈ play(G′), then (G′, P) ∈ ↓r.

We define equality E of processes to be the largest symmetric binary relation R on processes such that (P, Q) ∈ R implies:
(a) if P = ⊕_{i∈I} p!λᵢ; Pᵢ, then Q = ⊕_{i∈I} p!λᵢ; Qᵢ and (Pᵢ, Qᵢ) ∈ R for all i ∈ I;
(b) if P = Σ_{i∈I} p?λᵢ; Pᵢ, then Q = Σ_{i∈I} p?λᵢ; Qᵢ and (Pᵢ, Qᵢ) ∈ R for all i ∈ I.

It is then enough to show that the relation R_r = { (P, Q) | ∃G. (G, P) ∈ ↓r and (G, Q) ∈ ↓r } satisfies Clauses (a) and (b) (with R replaced by R_r), since this will imply R_r ⊆ E. Note first that (0, 0) ∈ R_r because (End, 0) ∈ ↓r, and that (0, 0) ∈ E because Clauses (a) and (b) are vacuously satisfied by the pair (0, 0), which must therefore belong to E.

The proof is by induction on d = depth(G, r). We only consider Clause (b), the proof for Clause (a) being similar and simpler. So, assume (P, Q) ∈ R_r and P = Σ_{i∈I} p?λᵢ; Pᵢ.

Case d = 1. In this case G = pr?λ; G′ and P = p?λ; P′ and (G′, P′) ∈ ↓r. From (G, Q) ∈ ↓r we get Q = p?λ; Q′ and (G′, Q′) ∈ ↓r. Hence Q has the required form and (P′, Q′) ∈ R_r.

Case d > 1. By definition of ↓r, there are five possible subcases.

⊢ iom(End, ∅) [End]

⊢ iom(G, M)
══════════════════════════════ [In]
⊢ iom(pq?λ; G, ⟨p, λ, q⟩ · M)

⊢ iom(Gᵢ, M · ⟨p, λᵢ, q⟩) for all i ∈ I    if pq!⊞_{i∈I} λᵢ; Gᵢ is cyclic then M = ∅
══════════════════════════════════════════════════════════════ [Out]
⊢ iom(pq!⊞_{i∈I} λᵢ; Gᵢ, M)

⊢ iom(G, M↾play(G))    ⊢ iom(G′, M↾play(G′))    M ≡ M↾play(G) · M↾play(G′)
────────────────────────────────────────────────────────────── [Par]
⊢ iom(G ‖ G′, M)

Figure 3:
Input/output matching of pgts with respect to queues.

(1) Case G = pr!λ; G′ and (G′, P) ∈ ↓r. From (G, Q) ∈ ↓r we get (G′, Q) ∈ ↓r. Then (P, Q) ∈ R_r.
(2) Case G = pr!⊞_{i∈I} λᵢ; Gᵢ and (Gᵢ, p?λᵢ; Pᵢ) ∈ ↓r for all i ∈ I and |I| > 1. From (G, Q) ∈ ↓r we get Q = π⃗; Σ_{i∈I} p?λᵢ; Qᵢ and (Gᵢ, π⃗; p?λᵢ; Qᵢ) ∈ ↓r for all i ∈ I. Since (p?λᵢ; Pᵢ, π⃗; p?λᵢ; Qᵢ) ∈ R_r for all i ∈ I, by induction Clause (b) is satisfied. Thus π⃗ = ε and (Pᵢ, Qᵢ) ∈ R_r for all i ∈ I.
(3) Case G = qr!⊞_{j∈J} λ′ⱼ; Gⱼ with q ≠ p and P = p?λ; π⃗; Σ_{j∈J} q?λ′ⱼ; P′ⱼ and (Gⱼ, p?λ; π⃗; q?λ′ⱼ; P′ⱼ) ∈ ↓r for all j ∈ J. From (G, Q) ∈ ↓r we get Q = π⃗′; Σ_{j∈J} q?λ′ⱼ; Q′ⱼ and (Gⱼ, π⃗′; q?λ′ⱼ; Q′ⱼ) ∈ ↓r for all j ∈ J. Since (p?λ; π⃗; q?λ′ⱼ; P′ⱼ, π⃗′; q?λ′ⱼ; Q′ⱼ) ∈ R_r for all j ∈ J, by induction Clause (b) is satisfied. Thus π⃗′ = p?λ; π⃗ and (π⃗; q?λ′ⱼ; P′ⱼ, π⃗; q?λ′ⱼ; Q′ⱼ) ∈ R_r for all j ∈ J.
(4) Case G = qs!⊞_{j∈J} λ′ⱼ; Gⱼ and r ≠ s and r ∈ play(Gⱼ) and (Gⱼ, P) ∈ ↓r for all j ∈ J. From (G, Q) ∈ ↓r we get (Gⱼ, Q) ∈ ↓r for all j ∈ J. Then (P, Q) ∈ R_r.
(5) Case G = qs?λ; G′ and r ∈ play(G′). Then (G′, P) ∈ ↓r. From (G, Q) ∈ ↓r we get (G′, Q) ∈ ↓r. Then (P, Q) ∈ R_r.

To ensure the correspondence between inputs and outputs, in Figure 3 we define the input/output matching of pgts with respect to queues. The intuition is that every input should come with a corresponding message in the queue (Rule [In]), ensuring that the input can take place. Then, each message in the queue can be exchanged for a corresponding output that will prefix the type (Rule [Out]): this output will then precede the previously inserted input and thus ensure again that the input can take place. In short, input/output matching holds if the inputs in the type are matched either by a message in the queue or by a preceding output in the type. We say that an sgt is cyclic if its tree contains itself as a proper subtree.
So the condition "if the sgt is cyclic then the queue is empty" in Rule [Out] ensures that there is no message left in the queue at the beginning of a new cycle and that all messages put in the queue by cyclic sgts have matching inputs in the same cycle. Rule [Par] requires the predicate to hold for all sgts of a parallel composition. In this rule we use the projection of a queue on a set of participants P, defined as follows:

∅↾P = ∅    (⟨p, λ, q⟩ · M)↾P = ⟨p, λ, q⟩ · (M↾P) if q ∈ P
                              M↾P otherwise

Note that if the pgt is projectable, then the queue is equivalent to the concatenation in any order of its projections on the players of the sgts which are its components.

The double line indicates that the rules are interpreted coinductively [Pie02] (Chapter 21). The condition in Rule [Out] guarantees that we get only regular proof derivations, therefore the judgement ⊢ iom(G, M) is decidable.

If we derive ⊢ iom(G, ∅) we can ensure that in G ‖ ∅ all inputs are matched by corresponding outputs and vice versa, see the Progress Theorem (Theorem 6.20). The progress property holds also for standard global types [DY11, CDCYP16].

The next example illustrates the use of the input/output predicate on a number of sgts (both cyclic and not cyclic) and queues.

Example 6.6. (1)
The sgt qp?λ; pq!λ′; pq?λ′ is input/output matching for the queue ⟨q, λ, p⟩, as shown by the following derivation:

⊢ iom(End, ∅)
⊢ iom(pq?λ′, ⟨p, λ′, q⟩)
⊢ iom(pq!λ′; pq?λ′, ∅)
⊢ iom(qp?λ; pq!λ′; pq?λ′, ⟨q, λ, p⟩)

(2) Let G = pq!λ; pq!λ; pq?λ; G. Then G is not input/output matching for the empty queue. Indeed, we cannot complete the proof tree for ⊢ iom(G, ∅), since G is cyclic, so we cannot apply Rule [Out] to infer the premise ⊢ iom(G, ⟨p, λ, q⟩) in the following deduction:

⊢ iom(G, ⟨p, λ, q⟩)
⊢ iom(pq?λ; G, ⟨p, λ, q⟩ · ⟨p, λ, q⟩)
⊢ iom(pq!λ; pq?λ; G, ⟨p, λ, q⟩)
⊢ iom(G, ∅)

(3) Let G′ = pq!λ₁; pq?λ₁; G′ ⊞ pq!λ₂; pq?λ₂. Then G′ is input/output matching for the empty queue, as we can see from the infinite (but regular) proof tree that follows:

        ⋮
⊢ iom(G′, ∅)                    ⊢ iom(End, ∅)
⊢ iom(pq?λ₁; G′, ⟨p, λ₁, q⟩)     ⊢ iom(pq?λ₂, ⟨p, λ₂, q⟩)
                ⊢ iom(G′, ∅)

(4) Let G₁ = pq!λ; pq!λ; pq?λ; G₂ and G₂ = pr!λ; pr?λ; G₂. Then G₁ is not input/output matching for the empty queue. Indeed, we cannot complete the proof tree for ⊢ iom(G₁, ∅), since G₂ is cyclic, so we cannot apply Rule [Out] to infer the premise ⊢ iom(G₂, ⟨p, λ, q⟩) in the following deduction:

⊢ iom(G₂, ⟨p, λ, q⟩)
⊢ iom(pq?λ; G₂, ⟨p, λ, q⟩ · ⟨p, λ, q⟩)
⊢ iom(pq!λ; pq?λ; G₂, ⟨p, λ, q⟩)
⊢ iom(G₁, ∅)

Instead, G₂ is input/output matching for the empty queue:

        ⋮
⊢ iom(G₂, ∅)
⊢ iom(pr?λ; G₂, ⟨p, λ, r⟩)
⊢ iom(G₂, ∅)

Projectability, boundedness and input/output matching are the three properties that single out the agts we want to use in our type system.

Definition 6.7 (Well-formed Asynchronous Global Types). We say that the agt G ‖ M is well formed if G↾p is defined for all p, G is bounded and ⊢ iom(G, M) is derivable.

Clearly, it is sufficient to check that G↾p is defined for all p ∈ play(G), since for any other p we have G↾p = 0.

6.2. Type System.
We are now ready to present our type system. The unique typing rule for networks is given in Figure 4, where we assume the agt to be well formed.

We first define a preorder on processes, P ≤ Q, meaning that process P can be used where we expect process Q. More precisely, P ≤ Q if either P is equal to Q, or we are in one of two situations: either both P and Q are output processes, sending the same labels to the same participant, and after the send P continues with a process that can be used when we expect the corresponding one in Q; or they are both input processes receiving labels from the same participant, and P may receive more labels than Q (and thus have more behaviours) but whenever it receives the same label as Q it continues with a process that can be used when we expect the corresponding one in Q. The rules are interpreted coinductively since the processes may have infinite (regular) trees.

A network N ‖ M is typed by the agt G ‖ M if for every participant p such that p[[P]] ∈ N the process P behaves as specified by the projection of G on p. In Rule [Net], the condition play(G) ⊆ {pᵢ | i ∈ I} ensures that all players of G appear in the network. Moreover it permits additional participants that do not appear in G, allowing the typing of sessions containing p[[0]] for a fresh p, a property required to guarantee invariance of types under structural congruence of networks.

Example 6.8.

The network of Example 2.4 can be typed by G ‖ ∅ for four possible choices for G:

pq!λ; qp!λ′; pq?λ; qp?λ′    pq!λ; qp!λ′; qp?λ′; pq?λ
qp!λ′; pq!λ; pq?λ; qp?λ′    qp!λ′; pq!λ; qp?λ′; pq?λ

0 ≤ 0 [≤-0]

Pᵢ ≤ Qᵢ for all i ∈ I
═══════════════════════════════════ [≤-out]
⊕_{i∈I} p!λᵢ; Pᵢ ≤ ⊕_{i∈I} p!λᵢ; Qᵢ

Pᵢ ≤ Qᵢ for all i ∈ I
═══════════════════════════════════ [≤-In]
Σ_{i∈I∪J} p?λᵢ; Pᵢ ≤ Σ_{i∈I} p?λᵢ; Qᵢ

Pᵢ ≤ G↾pᵢ for all i ∈ I    play(G) ⊆ {pᵢ | i ∈ I}
─────────────────────────────────── [Net]
⊢ Π_{i∈I} pᵢ[[Pᵢ]] ‖ M : G ‖ M

Figure 4:
Preorder on processes and network typing rule.

pq!⊞_{i∈I} λᵢ; Gᵢ ‖ M −pq!λₖ→ Gₖ ‖ M · ⟨p, λₖ, q⟩ where k ∈ I [Ext-Out]

pq?λ; G ‖ ⟨p, λ, q⟩ · M −pq?λ→ G ‖ M [Ext-In]

Gᵢ ‖ M · ⟨p, λᵢ, q⟩ −β→ G′ᵢ ‖ M′ · ⟨p, λᵢ, q⟩ for all i ∈ I    p ∉ play(β)
────────────────────────────────────────────────── [IComm-Out]
pq!⊞_{i∈I} λᵢ; Gᵢ ‖ M −β→ pq!⊞_{i∈I} λᵢ; G′ᵢ ‖ M′

G ‖ M −β→ G′ ‖ M′    q ∉ play(β)
────────────────────────────────────────────────── [IComm-In]
pq?λ; G ‖ ⟨p, λ, q⟩ · M −β→ pq?λ; G′ ‖ ⟨p, λ, q⟩ · M′

Figure 5:
LTS for simple agts.

since each participant only needs to do the output before the input. Notice that this network cannot be typed with the standard global types of [HYC16].

The network N′ ‖ M′ of Example 5.11 can be typed by the agt

pq?λ₁; pr!λ; pr?λ ‖ M′

The following proposition allows us to consider only simple agts in discussing properties of the type system. The proof immediately follows by the definition of projection and the typing Rule [Net].

Proposition 6.9. If ⊢ N ‖ M : Π_{i∈I} Gᵢ ‖ M, then N ≡ Π_{i∈I} Nᵢ and ⊢ Nᵢ ‖ M↾play(Gᵢ) : Gᵢ ‖ M↾play(Gᵢ) for all i ∈ I.

Figure 5 gives the LTS for simple agts. It shows that a communication can be performed also under a choice or an input guard, provided it has a different player. More precisely, in Rule [IComm-Out], the premise guarantees that the communication β of Gᵢ is independent from the choice. Notice that there are only two cases in which β could be dependent on the choice: 1) if β were another communication performed by p, which is excluded by the condition p ∉ play(β), and 2) if β were the matching input for the output performed by the choice. The latter situation is excluded by the requirement on the form of the queue, since in this case β would be an input by q of the last message inserted by p in the queue, but this is not possible since this message is still on the queue after the occurrence of β. Similarly, in Rule [IComm-In], the condition q ∉ play(β) guarantees that β is not another communication performed by the player q of the guarding input. Again, the communication β in the premise must be able to occur as if it were performed after the guarding communication, therefore it must use the queue that would result from executing the guarding communication.

We say that G ‖ M −β→ G′ ‖ M′ is a top transition if it is derived using either Rule [Ext-Out] or Rule [Ext-In]. We show that top transitions preserve the well-formedness of simple agts:

Lemma 6.10. If G ‖ M −β→ G′ ‖ M′ is a top transition and G ‖ M is well formed, then G′ ‖ M′ is well formed too.
If G ‖ M --β--> G′ ‖ M′ is a top transition and G ‖ M is well formed, then G′ ‖ M′ is well formed too.

Proof. If the transition is derived using Rule [Ext-Out], then G = pq!⊞_{i∈I} λ_i; G_i and for some k ∈ I we have G′ = G_k and M′ ≡ M·⟨p,λ_k,q⟩. We show that G_k ‖ M·⟨p,λ_k,q⟩ is well formed. Since G↾p is defined for all p, by definition of projection also G_k↾p is defined for all p. Since G is bounded and G_k is a sub-tree of G, also G_k is bounded. Finally, ⊢ iom(G, M) implies ⊢ iom(G_k, M·⟨p,λ_k,q⟩) by inversion on Rule [Out] of Figure 3.
If the transition is derived using Rule [Ext-In], then G = pq?λ; G′ and the proof is similar and simpler.

The following lemma detects some transitions of a simple agt from the projections of its sgt.

Lemma 6.11.
(1) If G↾p = ⊕_{i∈I} q!λ_i; P_i and G ‖ M is well formed, then G ‖ M --pq!λ_i--> G_i ‖ M·⟨p,λ_i,q⟩ and G_i↾p = P_i for all i ∈ I.
(2) If G↾q = Σ_{i∈I} p?λ_i; P_i and G ‖ M is well formed and M ≡ ⟨p,λ,q⟩·M′ for some λ, then I = {k} and λ = λ_k and G ‖ M --pq?λ_k--> G′ ‖ M′ and G′↾q = P_k.

Proof. (1) The proof is by induction on d = depth(G, p).
Case d = 1. By definition of projection (see Figure 2), G↾p = ⊕_{i∈I} q!λ_i; P_i implies G = pq!⊞_{i∈I} λ_i; G_i with G_i↾p = P_i for all i ∈ I. Then by Rule [Ext-Out] we may conclude G ‖ M --pq!λ_i--> G_i ‖ M·⟨p,λ_i,q⟩ for all i ∈ I.
Case d > 1. In this case either i) G = rs!⊞_{j∈J} λ′_j; G_j with r ≠ p or ii) G = rs?λ; G′ with s ≠ p.
i) There are three subcases. If s = p and |J| =
1, say J = {1}, then G = rp!λ′; G₁. By definition of projection and by assumption G₁↾p = G↾p = ⊕_{i∈I} q!λ_i; P_i. By Lemma 6.4(1) depth(G, p) > depth(G₁, p). By Lemma 6.10 G₁ ‖ M·⟨r,λ′,p⟩ is well formed. Then by induction G₁ ‖ M·⟨r,λ′,p⟩ --pq!λ_i--> G′_i ‖ M·⟨r,λ′,p⟩·⟨p,λ_i,q⟩ and G′_i↾p = P_i for all i ∈ I. Since M·⟨r,λ′,p⟩·⟨p,λ_i,q⟩ ≡ M·⟨p,λ_i,q⟩·⟨r,λ′,p⟩, by Rule [IComm-Out] we get G ‖ M --pq!λ_i--> rp!λ′; G′_i ‖ M·⟨p,λ_i,q⟩ for all i ∈ I. By definition of projection (rp!λ′; G′_i)↾p = G′_i↾p and so (rp!λ′; G′_i)↾p = P_i for all i ∈ I.
If s = p and |J| >
1, by definition of projection and the assumption that G↾p is a choice of output actions on q we have that G↾p = q!λ; P with P = π⃗; Σ_{j∈J} r?λ′_j; Q_j and G_j↾p = q!λ; π⃗; r?λ′_j; Q_j for all j ∈ J. By Lemma 6.4(1) depth(G, p) > depth(G_j, p) for all j ∈ J. By Lemma 6.10 G_j ‖ M·⟨r,λ′_j,s⟩ is well formed. Then by induction G_j ‖ M·⟨r,λ′_j,s⟩ --pq!λ--> G′_j ‖ M·⟨r,λ′_j,s⟩·⟨p,λ,q⟩ and G′_j↾p = π⃗; r?λ′_j; Q_j for all j ∈ J. Since M·⟨r,λ′_j,s⟩·⟨p,λ,q⟩ ≡ M·⟨p,λ,q⟩·⟨r,λ′_j,s⟩, by Rule [IComm-Out] we get G ‖ M --pq!λ--> rp!⊞_{j∈J} λ′_j; G′_j ‖ M·⟨p,λ,q⟩. Lastly (rp!⊞_{j∈J} λ′_j; G′_j)↾p = π⃗; Σ_{j∈J} r?λ′_j; Q_j since G′_j↾p = π⃗; r?λ′_j; Q_j. We may then conclude that (rp!⊞_{j∈J} λ′_j; G′_j)↾p = P.
If s ≠ p, then by definition of projection G↾p = G_j↾p for all j ∈ J. By Lemma 6.4(1) depth(G, p) > depth(G_j, p) for all j ∈ J. Then by induction G_j ‖ M --pq!λ_i--> G_{i,j} ‖ M·⟨p,λ_i,q⟩ and G_{i,j}↾p = P_i for all i ∈ I and all j ∈ J. By Rule [IComm-Out] G ‖ M --pq!λ_i--> rs!⊞_{j∈J} λ′_j; G_{i,j} ‖ M·⟨p,λ_i,q⟩ for all i ∈ I. By definition of projection (rs!⊞_{j∈J} λ′_j; G_{i,j})↾p = G_{i,j}↾p = P_i for all i ∈ I.
ii) The proof of this case is similar and simpler than the proof of case i). It uses Lemmas 6.4(2) and 6.10 and Rule [IComm-In], instead of Lemmas 6.4(1) and 6.10 and Rule [IComm-Out]. Note that, in order to apply Rule [IComm-In], we need M ≡ ⟨r,λ,s⟩·M′. This derives from input/output matching of rs?λ; G′ for the queue M using Rule [In] of Figure 3.
(2) The proof is by induction on d = depth(G, q).
Case d = 1. By definition of projection and the hypothesis G↾q = Σ_{i∈I} p?λ_i; P_i, it must be G = pq?λ; G′ and |I| =
1, say I = {k}, and λ = λ_k and G′↾q = P_k. Then by Rule [Ext-In] we deduce G ‖ ⟨p,λ_k,q⟩·M′ --pq?λ_k--> G′ ‖ M′.
Case d > 1. In this case either i) G = rs!⊞_{j∈J} λ′_j; G_j with r ≠ q or ii) G = rs?λ; G′ with s ≠ q.
i) There are two subcases, depending on whether s = q or s ≠ q. The most interesting case is the first one, namely G = rq!⊞_{j∈J} λ′_j; G_j. By definition of projection G↾q = π⃗; Σ_{j∈J} r?λ′_j; Q_j where G_j↾q = π⃗; r?λ′_j; Q_j. By assumption G↾q = Σ_{i∈I} p?λ_i; P_i, thus it must be either π⃗ = ε or |I| =
1, say I = {k}, and π⃗ = p?λ_k; π⃗′.
If π⃗ = ε, we have that r = p and J = I and λ′_i = λ_i and Q_i = P_i for all i ∈ I. This means that G = pq!⊞_{i∈I} λ_i; G_i and G_i↾q = p?λ_i; P_i. Let M_i ≡ ⟨p,λ,q⟩·M′_i where M′_i = M′·⟨p,λ_i,q⟩. By Lemma 6.10 G_i ‖ M_i is well formed for all i ∈ I. By Lemma 6.4(1) depth(G, q) > depth(G_i, q) for all i ∈ I. By induction hypothesis, G_i ‖ M_i --pq?λ--> G′_i ‖ M′_i and λ = λ_i and G′_i↾q = P_i for all i ∈ I. This implies that |I| =
1, say I = {k}. Then G = pq!λ; G_k and by Rule [IComm-Out] we deduce G ‖ M --pq?λ--> G′ ‖ M′, where G′ = pq!λ; G′_k. Whence by definition of projection G↾q = G_k↾q = p?λ_k; P_k and G′↾q = G′_k↾q = P_k.
If π⃗ = p?λ_k; π⃗′, then G↾q = p?λ_k; P_k, where P_k = π⃗′; Σ_{j∈J} r?λ′_j; Q_j. Let M_j ≡ ⟨p,λ,q⟩·M′_j where M′_j = M′·⟨r,λ′_j,q⟩. For all j ∈ J, G_j ‖ M_j is well formed by Lemma 6.10 and depth(G, q) > depth(G_j, q) by Lemma 6.4(1). By induction hypothesis we get λ = λ_k and G_j ‖ M_j --pq?λ--> G′_j ‖ M′_j for all j ∈ J. Let G′ = rq!⊞_{j∈J} λ′_j; G′_j. Then G ‖ M --pq?λ--> G′ ‖ M′ by Rule [IComm-Out] and G′↾q = π⃗′; Σ_{j∈J} r?λ′_j; Q_j = P_k.
ii) The proof of this case is similar and simpler than the proof of case i). It uses Lemmas 6.4(2) and 6.10 and Rule [IComm-In], instead of Lemmas 6.4(1) and 6.10 and Rule [IComm-Out]. Note that, in order to apply Rule [IComm-In], we need M ≡ ⟨r,λ,s⟩·M′. This derives from input/output matching of rs?λ; G′ for the queue M using Rule [In] of Figure 3.

We can also detect the projections of an sgt from a transition of the simple agt obtained by putting the sgt in parallel with a compliant queue.

Lemma 6.12.
(1) If G ‖ M --pq!λ--> G′ ‖ M′ and G ‖ M is well formed, then M′ ≡ M·⟨p,λ,q⟩ and G↾p = ⊕_{i∈I} q!λ_i; P_i and λ = λ_k and G′↾p = P_k for some k ∈ I and G↾r ≤ G′↾r for all r ≠ p.
(2) If G ‖ M --pq?λ--> G′ ‖ M′ and G ‖ M is well formed, then M ≡ ⟨p,λ,q⟩·M′ and G↾q = p?λ; G′↾q and G↾r ≤ G′↾r for all r ≠ q.

Proof. (1) By induction on the inference of the transition G ‖ M --pq!λ--> G′ ‖ M′.
Base Case.
The applied rule must be Rule [Ext-Out], so G = pq!⊞_{i∈I} λ_i; G_i and λ = λ_k and G′ = G_k for some k ∈ I, and

pq!⊞_{i∈I} λ_i; G_i ‖ M --pq!λ_k--> G_k ‖ M·⟨p,λ_k,q⟩

By definition of projection G↾p = ⊕_{i∈I} q!λ_i; G_i↾p and G′↾p = G_k↾p. Again by definition of projection, if r ∉ {p, q} or r = q and |I| =
1, we have G↾r = G_k↾r and so G↾r = G′↾r. If r = q and |I| >
1, then G↾q = π⃗; Σ_{i∈I} p?λ_i; Q_i where G_i↾q = π⃗; p?λ_i; Q_i for all i ∈ I and so G↾q ≤ G_k↾q.
Inductive Cases.
If the applied rule is [IComm-Out], then G = st!⊞_{j∈J} λ′_j; G_j and G′ = st!⊞_{j∈J} λ′_j; G′_j and

G_j ‖ M·⟨s,λ′_j,t⟩ --pq!λ--> G′_j ‖ M′·⟨s,λ′_j,t⟩ for all j ∈ J    p ≠ s
──────────────────────────────────────────────────────────
st!⊞_{j∈J} λ′_j; G_j ‖ M --pq!λ--> st!⊞_{j∈J} λ′_j; G′_j ‖ M′

By Lemma 6.10 G_j ‖ M·⟨s,λ′_j,t⟩ is well formed. By induction hypothesis M′·⟨s,λ′_j,t⟩ ≡ M·⟨s,λ′_j,t⟩·⟨p,λ,q⟩, which implies M′ ≡ M·⟨p,λ,q⟩. If p ≠ t, by definition of projection G↾p = G_j↾p for all j ∈ J. Similarly G′↾p = G′_j↾p for all j ∈ J. By induction hypothesis G_j↾p = ⊕_{i∈I} q!λ_i; P_i and λ = λ_k and G′_j↾p = P_k for some k ∈ I. This implies G↾p = ⊕_{i∈I} q!λ_i; P_i and G′↾p = P_k.
If p = t and |J| = 1 the proof is as for p ∉ {s, t}. If p = t and |J| >
1, then the definition of projection gives G↾p = π⃗; Σ_{j∈J} s?λ′_j; Q_j and G_j↾p = π⃗; s?λ′_j; Q_j and G′↾p = π⃗′; Σ_{j∈J} s?λ′_j; Q′_j and G′_j↾p = π⃗′; s?λ′_j; Q′_j for all j ∈ J. By induction hypothesis π⃗ = q!λ; π⃗′, which implies G↾p = q!λ; G′↾p.
For r ∉ {p, s, t} by definition of projection G↾r = G_j↾r for all j ∈ J. Similarly G′↾r = G′_j↾r for all j ∈ J. By induction hypothesis G_j↾r ≤ G′_j↾r, which implies G↾r ≤ G′↾r.
For participant s we have G↾s = ⊕_{j∈J} t!λ′_j; G_j↾s ≤ ⊕_{j∈J} t!λ′_j; G′_j↾s = G′↾s.
For participant t ≠ p, if |J| = 1 the proof is as for r ∉ {p, s, t}. If |J| >
1, then we have G↾t = π⃗; Σ_{j∈J} s?λ′_j; R_j where G_j↾t = π⃗; s?λ′_j; R_j and G′↾t = π⃗′; Σ_{j∈J} s?λ′_j; R′_j where G′_j↾t = π⃗′; s?λ′_j; R′_j. From G_j↾t ≤ G′_j↾t for all j ∈ J we get π⃗′ = π⃗ and R_j ≤ R′_j for all j ∈ J. This implies G↾t ≤ G′↾t.
If the applied rule is [IComm-In] the proof is similar and simpler.
(2) The proof is similar to the proof of (1). The most interesting case is the application of Rule [IComm-Out]

G_j ‖ M·⟨s,λ′_j,t⟩ --pq?λ--> G′_j ‖ M′·⟨s,λ′_j,t⟩ for all j ∈ J    q ≠ s
──────────────────────────────────────────────────────────
st!⊞_{j∈J} λ′_j; G_j ‖ M --pq?λ--> st!⊞_{j∈J} λ′_j; G′_j ‖ M′

By Lemma 6.10 G_j ‖ M·⟨s,λ′_j,t⟩ is well formed. By induction hypothesis M·⟨s,λ′_j,t⟩ ≡ ⟨p,λ,q⟩·M′·⟨s,λ′_j,t⟩, which implies M ≡ ⟨p,λ,q⟩·M′. If q ≠ t, by definition of projection G↾q = G_j↾q for all j ∈ J. Similarly G′↾q = G′_j↾q for all j ∈ J. By induction hypothesis G_j↾q = p?λ; G′_j↾q. This implies G↾q = p?λ; G′↾q.
If q = t and |J| = 1 the proof is as in the previous case. If q = t and |J| >
1, then the definition of projection gives G↾q = π⃗; Σ_{j∈J} s?λ′_j; Q_j and G_j↾q = π⃗; s?λ′_j; Q_j and G′↾q = π⃗′; Σ_{j∈J} s?λ′_j; Q′_j and G′_j↾q = π⃗′; s?λ′_j; Q′_j for all j ∈ J. By induction hypothesis π⃗ = p?λ; π⃗′, which implies G↾q = p?λ; G′↾q.
The proof of G↾r ≤ G′↾r for all r ≠ q is as in case (1).

The previous lemma will be used to show that transitions of well-formed simple agts preserve projectability of their sgt components. The LTS preserves well-formedness if also input/output matching is maintained.

Lemma 6.13. If ⊢ iom(G, M) and G ‖ M --β--> G′ ‖ M′, then ⊢ iom(G′, M′).

Proof. By induction on the inference of the transition G ‖ M --β--> G′ ‖ M′ of Figure 5.
Base Cases.
Immediate from Lemma 6.10.
Inductive Cases.
Let G ‖ M --β--> G′ ‖ M′ with Rule [IComm-Out]. Then we get G = pq!⊞_{i∈I} λ_i; G_i and G′ = pq!⊞_{i∈I} λ_i; G′_i and G_i ‖ M·⟨p,λ_i,q⟩ --β--> G′_i ‖ M′·⟨p,λ_i,q⟩ for all i ∈ I. From Rule [Out] of Figure 3, we get ⊢ iom(G_i, M·⟨p,λ_i,q⟩) for all i ∈ I. By induction hypothesis for all i ∈ I we can derive ⊢ iom(G′_i, M′·⟨p,λ_i,q⟩). Therefore using Rule [Out] we conclude ⊢ iom(G′, M′).
Similarly for Rule [IComm-In].

We are now able to show that transitions preserve well-formedness of simple agts.

Lemma 6.14. If G ‖ M is a well formed simple agt and G ‖ M --β--> G′ ‖ M′, then G′ ‖ M′ is a well formed simple agt too.

Proof. Let β = pq!λ. By Lemma 6.12(1) we have that G′↾r is defined for all r ∈ play(G). Similarly for β = pq?λ, using Lemma 6.12(2). The proof that depth(G″, r) is finite for all r and G″ sub-tree of G′ is easy by induction on the transition rules of Figure 5. Finally, from Lemma 6.13 we have that G′ is input/output matching for the queue M′.

The two clauses of the projection of an output choice on the receiver, see Figure 2, are needed for the LTS to preserve projectability of well-formed simple agts, as the following example shows.

Example 6.15.
Let G = pq!λ; pq?λ; G′, where G′ = qr!λ₁; qr?λ₁; pq?λ ⊞ qr!λ₂; qr?λ₂; pq?λ. The simple agt G ‖ ⟨p,λ,q⟩ is well formed. Assume we modify the definition of projection of an output choice on the receiver by removing its first clause and the restriction of the second to |J| > 1. Then G↾q is defined since (pq?λ; G′)↾q = p?λ; (r!λ₁; p?λ ⊕ r!λ₂; p?λ) has the required shape. Applying Rule [IComm-Out] we get G ‖ ⟨p,λ,q⟩ --pq?λ--> pq!λ; G′ ‖ ∅. The projection (pq!λ; G′)↾q would not be defined since G′↾q = r!λ₁; p?λ ⊕ r!λ₂; p?λ does not have the required shape.

By virtue of Lemma 6.14, we will henceforth only consider well-formed simple agts.

We end this section with the expected results of Subject Reduction, Session Fidelity [HYC08, HYC16] and Progress [DY11, CDCYP16], which rely as usual on Inversion and Canonical Form lemmas.
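Before turning to these results, the operational content of Figure 5 can be made concrete. The following Python sketch is illustrative only: the datatype encodings End/Out/In and the tuple formats for messages and transition labels are our own assumptions, not the paper's. It implements Rules [Ext-Out], [Ext-In] and [IComm-In]; Rule [IComm-Out], which additionally requires the same move β to be possible in every branch of an output choice, and the structural equivalence ≡ on queues are omitted for brevity.

```python
from dataclasses import dataclass
from typing import List, Tuple

Msg = Tuple[str, str, str]          # (sender p, label, receiver q)
Act = Tuple[str, str, str, str]     # (kind "!" or "?", p, q, label)

@dataclass
class End:                          # terminated sgt
    pass

@dataclass
class Out:                          # pq!⊞_{i∈I} λ_i; G_i
    p: str
    q: str
    branches: List[Tuple[str, object]]   # [(label, continuation)]

@dataclass
class In:                           # pq?λ; G
    p: str
    q: str
    lab: str
    cont: object

def player(beta: Act) -> str:
    """play(β): the sender of an output, the receiver of an input."""
    kind, p, q, _ = beta
    return p if kind == "!" else q

def steps(G, M: List[Msg]) -> List[Tuple[Act, object, List[Msg]]]:
    """Transitions of the simple agt G ‖ M (partial sketch of Figure 5)."""
    res = []
    if isinstance(G, Out):
        # [Ext-Out]: pick any branch k and append its message to the queue
        for lab, Gk in G.branches:
            res.append((("!", G.p, G.q, lab), Gk, M + [(G.p, lab, G.q)]))
    elif isinstance(G, In):
        head = (G.p, G.lab, G.q)
        if M and M[0] == head:
            # [Ext-In]: consume the message at the head of the queue
            res.append((("?", G.p, G.q, G.lab), G.cont, M[1:]))
            # [IComm-In]: a move of the continuation, computed with the
            # guard's message already consumed, provided its player is not q
            for beta, G2, M2 in steps(G.cont, M[1:]):
                if player(beta) != G.q:
                    res.append((beta, In(G.p, G.q, G.lab, G2), [head] + M2))
    return res
```

For instance, for an input-guarded agt whose continuation is an output of another participant, `steps` returns both the guarded input and the independent inner output, as allowed by [IComm-In].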
Lemma 6.16 (Inversion). If ⊢ N ‖ M : G ‖ M, then P ≤ G↾p for all p[[P]] ∈ N.

Lemma 6.17 (Canonical Form). If ⊢ N ‖ M : G ‖ M and p ∈ play(G), then p[[P]] ∈ N and P ≤ G↾p.

Theorem 6.18 (Subject Reduction). If ⊢ N ‖ M : G ‖ M and N ‖ M --β--> N′ ‖ M′, then G ‖ M --β--> G′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′.

Proof. Let β = pq!λ. By Rule [Send] of Figure 1, p[[⊕_{i∈I} q!λ_i; P_i]] ∈ N and p[[P_k]] ∈ N′ and M′ = M·⟨p,λ_k,q⟩ and λ = λ_k for some k ∈ I. Moreover r[[R]] ∈ N iff r[[R]] ∈ N′ for all r ≠ p. From Lemma 6.16 we get
(1) ⊕_{i∈I} q!λ_i; P_i ≤ G↾p, which implies G↾p = ⊕_{i∈I} q!λ_i; P′_i with P_i ≤ P′_i for all i ∈ I from Rule [≤-out] of Figure 4, and
(2) R ≤ G↾r for all r ≠ p such that r[[R]] ∈ N.
By Lemma 6.11(1) G ‖ M --pq!λ_k--> G_k ‖ M·⟨p,λ_k,q⟩ and G_k↾p = P′_k, which implies P_k ≤ G_k↾p. By Lemma 6.12(1) G↾r ≤ G_k↾r for all r ≠ p. By transitivity of ≤ we have R ≤ G_k↾r for all r ≠ p. We can then choose G′ = G_k.
Let β = pq?λ. By Rule [Rcv] of Figure 1, q[[Σ_{j∈J} p?λ_j; Q_j]] ∈ N and q[[Q_k]] ∈ N′ and M = ⟨p,λ_k,q⟩·M′ and λ = λ_k for some k ∈ J. Moreover r[[R]] ∈ N iff r[[R]] ∈ N′ for all r ≠ q. From Lemma 6.16 we get
(1) Σ_{j∈J} p?λ_j; Q_j ≤ G↾q, which implies G↾q = Σ_{i∈I} p?λ_i; Q′_i with I ⊆ J and Q_i ≤ Q′_i for all i ∈ I from Rule [≤-in] of Figure 4, and
(2) R ≤ G↾r for all r ≠ q such that r[[R]] ∈ N.
By Lemma 6.11(2), since M = ⟨p,λ_k,q⟩·M′, we get G ‖ M --pq?λ_k--> G_k ‖ M′ and I = {k} and G_k↾q = Q′_k, which implies Q_k ≤ G_k↾q. By Lemma 6.12(2) G↾r ≤ G_k↾r for all r ≠ q. By transitivity of ≤ we have R ≤ G_k↾r for all r ≠ q. We can then choose G′ = G_k.

Theorem 6.19 (Session Fidelity). If ⊢ N ‖ M : G ‖ M and G ‖ M --β--> N′ ‖ M′ is mirrored on the network side, namely N ‖ M --β--> N′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′ whenever G ‖ M --β--> G′ ‖ M′.

Proof. Let β = pq!λ. By Lemma 6.12(1) M′ ≡ M·⟨p,λ,q⟩, G↾p = ⊕_{i∈I} q!
λ_i; P_i, λ = λ_k, G′↾p = P_k for some k ∈ I and G↾r ≤ G′↾r for all r ≠ p. From Lemma 6.17 we get N ≡ p[[P]] ‖ N″ and
(1) P = ⊕_{i∈I} q!λ_i; P′_i with P′_i ≤ P_i for all i ∈ I, from Rule [≤-out] of Figure 4, and
(2) R ≤ G↾r for all r[[R]] ∈ N″.
We can then choose N′ = p[[P′_k]] ‖ N″.
Let β = pq?λ. By Lemma 6.12(2) M ≡ ⟨p,λ,q⟩·M′, G↾q = p?λ; P, G′↾q = P and G↾r ≤ G′↾r for all r ≠ q. From Lemma 6.17 we get N ≡ q[[Q]] ‖ N″ and
(1) Q = p?λ; P′ + Q′ with P′ ≤ P, from Rule [≤-in] of Figure 4, and
(2) R ≤ G↾r for all r[[R]] ∈ N″.
We can then choose N′ = q[[P′]] ‖ N″.

We are now able to prove that in a typable network, every participant whose process is not terminated may eventually perform an action, and every message that is stored in the queue is eventually received. This property is generally referred to as progress.

Theorem 6.20 (Progress). If ⊢ N ‖ M : G ‖ M, then the network N ‖ M satisfies progress, namely, for every N₀ ‖ M₀ such that either N₀ ‖ M₀ = N ‖ M or N ‖ M --τ--> N₀ ‖ M₀:
(1) p[[P]] ∈ N₀ and P ≠ 0 imply N₀ ‖ M₀ --τ′·β--> N′ ‖ M′ with play(β) = {p};
(2) M₀ ≡ ⟨p,λ,q⟩·M₁ implies N₀ ‖ M₀ --τ′·pq?λ--> N′ ‖ M′.

Proof. (1) If P is an output process, then it can always move. If P is an input process, then by Subject Reduction (Theorem 6.18) ⊢ N₀ ‖ M₀ : G ‖ M₀ for some G; to lighten notation we write M for M₀ in the rest of the proof. We prove by induction on d = depth(G, p) that G ‖ M --τ′·β--> G′ ‖ M′ with play(β) = {p}. This will imply N₀ ‖ M₀ --τ′·β--> N′ ‖ M′ by Session Fidelity (Theorem 6.19).
Case d =
1. Here G = qp?λ; G′. Since G ‖ M is well formed, this implies M ≡ ⟨q,λ,p⟩·M′ by Rule [In] of Figure 3. Then G ‖ M --qp?λ--> G′ ‖ M′ by Rule [Ext-In] of Figure 5.
Case d >
1. Here we have either G = rs!⊞_{i∈I} λ_i; G_i with r ≠ p or G = rs?λ; G‴ with s ≠ p. By Lemma 6.4 this implies depth(G_i, p) < d for all i ∈ I in the first case, and depth(G‴, p) < d in the second case. Hence in both cases, by applying Rule [Ext-Out] or Rule [Ext-In] of Figure 5, we get G ‖ M --β′--> G″ ‖ M″ with depth(G″, p) < d. By induction G″ ‖ M″ --τ′·β--> G′ ‖ M′ with play(β) = {p}. Therefore G ‖ M --β′·τ′·β--> G′ ‖ M′ is the required transition sequence.
(2) We define the input depth of the input pq?λ in G, notation idepth(G, pq?λ), by induction on G:

idepth(rs!⊞_{i∈I} λ_i; G_i, pq?λ) = 1 + max_{i∈I} idepth(G_i, pq?λ)
idepth(rs?λ′; G′, pq?λ) = 1 if pq?λ = rs?λ′, and 1 + idepth(G′, pq?λ) otherwise

Notice that ⊢ iom(G, ⟨p,λ,q⟩·M) implies that idepth(G, pq?λ) is finite, since proof derivations are regular and only Rule [In] of Figure 3 adds messages to the queue.
By Subject Reduction (Theorem 6.18) ⊢ N ‖ M : G ‖ M for some G. We prove by induction on id = idepth(G, pq?λ) that G ‖ M --τ′·pq?λ--> G′ ‖ M′. This will imply N ‖ M --τ′·pq?λ--> N′ ‖ M′ by Session Fidelity (Theorem 6.19).
Case id =
1. Here G = pq?λ; G′, which implies G ‖ ⟨p,λ,q⟩·M′ --pq?λ--> G′ ‖ M′ by Rule [Ext-In] of Figure 5.
Case id >
1. By applying Rule [Ext-Out] or Rule [Ext-In] of Figure 5 we get G ‖ M --β--> G″ ‖ M″ and idepth(G″, pq?λ) < id. By induction G″ ‖ M″ --τ′·pq?λ--> G′ ‖ M′. We conclude G ‖ M --β·τ′·pq?λ--> G′ ‖ M′.

The iteration of Theorem 6.20(2) ensures that each message in the queue is eventually read. The input/output matching condition is needed to avoid orphan messages, i.e. messages which remain forever in the queue, as shown by Example 6.6(4). The boundedness condition guarantees that each player of an sgt G occurs as player in every path starting from the root of G. This fails for player r in the sgt of Example 6.3.
The proof of Theorem 6.20 shows that the desired transition sequences use only Rules [Ext-Out] and [Ext-In] and the output choice is arbitrary. Moreover the lengths of these transition sequences are bounded by depth(G, p) and idepth(G, pq?λ), respectively.

7. Event Structure Semantics of Asynchronous Global Types

We define now the event structure associated with a simple agt. The events of this ES will be equivalence classes of pairs whose elements are particular traces. The first elements are o-traces corresponding to the messages in the queue, as in n-events (Definition 5.3(1)). The second elements are traces in sgt trees.
For traces τ, as given in Definition 2.3, we use the following notational conventions:
– We denote by τ[i] the i-th element of τ, i > 0.
– For 0 < i ≤ j, we define τ[i...j] = τ[i] · · · τ[j] to be the subtrace of τ consisting of the (j − i + 1) elements starting with the i-th one and ending with the j-th one. If i > j, we define τ[i...j] to be the empty trace ε.
If not otherwise stated we assume that τ has n elements, so τ = τ[1...n].
In the traces we want to require that every input matches a corresponding output.
This is checked using the multiplicity of pq† in τ, defined by induction as follows:

m(pq†, ε) = 0
m(pq†, β·τ) = m(pq†, τ) + 1 if β = pq†λ for some λ, and m(pq†, τ) otherwise

where † ∈ {!, ?} (as in Definition 5.4).
An input of q from p matches a preceding output from p to q in a trace if it has the same label λ and the number of inputs from p to q in the subtrace before the given input is equal to the number of outputs from p to q in the subtrace before the given output.
This is formalised using the above multiplicity and the positions of communications in traces.

Definition 7.1 (Matching). The input τ[j] = pq?λ matches the output τ[i] = pq!λ in τ, dubbed i ∝_τ j, if i < j and m(pq!, τ[1...i−1]) = m(pq?, τ[1...j−1]).

For example, if τ = pq!λ; pq!λ; pq!λ; pq?λ; pq?λ, then 1 ∝_τ 4 and 2 ∝_τ
5, while no input matches the output at position 3.
As mentioned earlier, o-traces will be used to represent queues and general traces are paths in sgt trees. We want to define an equivalence relation on general traces, which allows us to exchange the order of adjacent communications when this order is not essential. This is the case if the communications have different players and in addition they are not matching according to Definition 7.1. However, the matching relation must also take into account the fact that some outputs are already on the queue. So we will consider well-formedness with respect to a prefixing o-trace. We proceed as follows:
– we start with well-formed traces (Definition 7.2);
– we define the swapping relation ⊲_ω which allows two communications to be interchanged in a trace τ, when these communications are independent in the trace ω·τ (Definition 7.3);
– then we show that ⊲_ω preserves ω-well-formedness (Lemma 7.4);
– finally we define the equivalence ≈_ω on ω-well-formed traces (Definition 7.5).
In a well-formed trace each input must have a corresponding output. We also need a notion of well-formedness for a suffix of a trace w.r.t. the whole trace.

Definition 7.2 (Well-formedness).
(1) A trace τ is well formed if every input matches an output in τ.
(2) A trace τ is τ′-well formed if τ′·τ is well formed.

As an example, the trace τ = pq!λ · pq!λ′ · pq?λ′ is not well formed since it is not the case that 2 ∝_τ
3. On the other hand, τ is pq!λ′-well formed, since pq!λ′·τ is well formed given that 1 ∝_{pq!λ′·τ} 4.

Definition 7.3 (Swapping). Let τ be ω-well formed. We say that τ ω-swaps to τ′, notation τ ⊲_ω τ′, if

τ = τ[1...i−1] · β · β′ · τ″    τ′ = τ[1...i−1] · β′ · β · τ″

and play(β) ∩ play(β′) = ∅ and ¬(i + |ω| ∝_{ω·τ} i + 1 + |ω|).

For instance, if ω = pq!λ and τ = pq!λ · pq?λ, then τ ω-swaps to τ′ = pq?λ · pq!λ because the input in τ matches the output in ω, not the adjacent output in τ.

Lemma 7.4. If τ is ω-well formed and τ ⊲_ω τ′, then τ′ is ω-well formed too.

Proof. Let τ = τ[1...i−1] · β · β′ · τ″ and τ′ = τ[1...i−1] · β′ · β · τ″. We want to prove that ω·τ′ is well formed. To this end, we will show that if β or β′ is an input, then it matches an output that occurs in the prefix (ω·τ)[1...i−1+|ω|] of ω·τ′. Note that it must be β ≠ β′, since by hypothesis play(β) ∩ play(β′) = ∅.
Suppose β′ is an input. Since τ is ω-well formed, β′ matches an output in (ω·τ)[1...i−1+|ω|]·β. This output cannot be β, since by hypothesis ¬(i + |ω| ∝_{ω·τ} i + 1 + |ω|). Hence β′ matches an output which occurs in the prefix (ω·τ)[1...i−1+|ω|] of ω·τ′.
Suppose now β = pq?λ and m(pq?, (ω·τ)[1...i−1+|ω|]) = m. Since τ is ω-well formed, β matches an output (ω·τ)[j] = pq!λ in the prefix (ω·τ)[1...i−1+|ω|] of ω·τ. Then 1 ≤ j < i+|ω| and m(pq!, (ω·τ)[1...j−1]) = m. Since β ≠ β′, also m(pq?, (ω·τ′)[1...i+|ω|]) = m. Then β matches (ω·τ)[j] also in ω·τ′.

From the previous lemma and the observation that if τ is ω-well formed and τ′ is obtained by swapping the i-th and (i+1)-th communications of τ, then play(τ′[i]) ∩ play(τ′[i+1]) = ∅ and ¬(i + |ω| ∝_{ω·τ′} i + 1 + |ω|), we deduce that the swapping relation is symmetric. This allows us to define ≈_ω as the equivalence relation induced by the swapping relation.
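Definitions 7.1 and 7.2, and the side condition of Definition 7.3, are all easy to check mechanically. The following Python sketch is an illustration under an assumed encoding (communications as tuples (kind, p, q, label), with kind "!" or "?"); positions are 1-based as in the text.

```python
def mult(kind, p, q, trace):
    """m(pq†, τ): number of pq† communications in τ, whatever their label."""
    return sum(1 for (k, a, b, _) in trace if (k, a, b) == (kind, p, q))

def matches(trace, i, j):
    """i ∝_τ j (Definition 7.1), with 1-based positions i, j."""
    ki, pi, qi, li = trace[i - 1]
    kj, pj, qj, lj = trace[j - 1]
    return (i < j and ki == "!" and kj == "?"
            and (pi, qi, li) == (pj, qj, lj)
            and mult("!", pi, qi, trace[:i - 1]) == mult("?", pj, qj, trace[:j - 1]))

def well_formed(trace):
    """Definition 7.2(1): every input matches some output."""
    return all(any(matches(trace, i, j) for i in range(1, j))
               for j in range(1, len(trace) + 1) if trace[j - 1][0] == "?")

def swappable(omega, trace, i):
    """Side condition of Definition 7.3 for interchanging trace[i], trace[i+1]."""
    b, b2 = trace[i - 1], trace[i]
    player = lambda c: c[1] if c[0] == "!" else c[2]
    return (player(b) != player(b2)
            and not matches(omega + trace, i + len(omega), i + 1 + len(omega)))
```

On the trace τ = pq!λ·pq!λ·pq!λ·pq?λ·pq?λ above, this yields 1 ∝_τ 4 and 2 ∝_τ 5 and no match for the output at position 3; and the swap of Definition 7.3's example is allowed because the input matches the output in ω.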
Definition 7.5 (Equivalence ≈_ω on ω-well-formed traces). The equivalence ≈_ω on ω-well-formed traces is the reflexive and transitive closure of ⊲_ω.

Observe that for o-traces all the equivalences ≈_ω collapse to ≈_ε and ≈_ε ⊂ ≍, where ≍ is the o-trace equivalence given in Definition 5.2. Indeed, it should be clear that ≈_ε ⊆ ≍. To show ≈_ε ≠ ≍, consider ω = pq!λ · pr!λ′ and ω′ = pr!λ′ · pq!λ. Then ω ≍ ω′ but ω ̸≈_ε ω′. This agrees with the fact that o-traces represent messages in queues, while general traces represent future communication actions.
Another constraint that we want to impose on traces in order to build events is that each communication must be a cause of at least one of those that follow it. This happens when:
– either the two communications have the same player, in which case we say that the first communication is required in the trace (Definition 7.6);
– or the first communication is an output and the second is the matching input.
We call pointedness the property of a trace in which each communication, except the last one, satisfies one of the two conditions above. Like well-formedness, also pointedness is parameterised on traces. We first define required communications.

Definition 7.6 (Required communication). We say that τ[i] is required in τ, notation req(i, τ), if play(τ[i]) ⊆ play(τ[(i+1)...n]), where n = |τ|.

Note that by definition the last element τ[n] is not required in τ.

Definition 7.7 (Pointedness). The trace τ is τ′-pointed if τ is τ′-well formed and for all i, 1 ≤ i < n, one of the following holds:
(1) either req(i, τ)
(2) or i + |τ′| ∝_{τ′·τ} j + |τ′| for some j > i.

Observe that the two conditions of the above definition are reminiscent of the two kinds of causality - local flow and cross-flow - discussed for network events in Section 5 (Definition 5.6).
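The two conditions of Definition 7.7 can likewise be checked mechanically. The sketch below continues the illustrative tuple encoding of communications as (kind, p, q, label); the leading underscore helpers restate the matching machinery of Definition 7.1 so that the fragment is self-contained.

```python
def players(comm):
    """play(β): the singleton player of a communication."""
    kind, p, q, _ = comm
    return {p} if kind == "!" else {q}

def _mult(kind, p, q, trace):
    return sum(1 for (k, a, b, _) in trace if (k, a, b) == (kind, p, q))

def _matches(trace, i, j):            # i ∝_τ j, 1-based (Definition 7.1)
    ki, pi, qi, li = trace[i - 1]
    kj, pj, qj, lj = trace[j - 1]
    return (i < j and ki == "!" and kj == "?"
            and (pi, qi, li) == (pj, qj, lj)
            and _mult("!", pi, qi, trace[:i - 1]) == _mult("?", pj, qj, trace[:j - 1]))

def _well_formed(trace):
    return all(any(_matches(trace, i, j) for i in range(1, j))
               for j in range(1, len(trace) + 1) if trace[j - 1][0] == "?")

def required(trace, i):
    """req(i, τ) (Definition 7.6): τ[i]'s player plays again later in τ."""
    later = set().union(set(), *(players(c) for c in trace[i:]))
    return players(trace[i - 1]) <= later

def pointed(trace, prefix):
    """τ is τ′-pointed (Definition 7.7): τ′-well formed, and every
    non-final communication is required or matched by a later input."""
    if not _well_formed(prefix + trace):
        return False
    k, n, full = len(prefix), len(trace), prefix + trace
    return all(required(trace, i)
               or any(_matches(full, i + k, j + k) for j in range(i + 1, n + 1))
               for i in range(1, n))
```

On Example 7.8 below, with ω = pq!λ·rq!λ, the trace pq!λ·pq?λ·rq?λ fails the check (its first output is neither required nor matched), while pq?λ·rq?λ and rq?λ·pq?λ pass.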
Indeed, Condition (1) holds if τ[i] is a local cause of some τ[j], j > i, while Condition (2) holds if τ[i] is a cross-cause of some τ[j], j > i.
Note also that the conditions of Definition 7.7 must be satisfied only by every τ[i] with i < n, thus they hold vacuously for any single communication and for the empty trace.
If τ = τ₁ · β · β′ is τ′-pointed, then either play(β) = play(β′) or β′ matches β in τ′·τ, i.e., |τ₁| + 1 + |τ′| ∝_{τ′·τ} |τ₁| + 2 + |τ′|. Also, if a trace τ is τ′-pointed for some τ′, we know that each communication in τ must be executed before the last one.

Example 7.8.
Let ω = pq!λ · rq!λ and τ = pq!λ · pq?λ · rq?λ. The trace τ is not ω-pointed, since the output pq!λ in τ is not matched by any input in ω·τ (the input pq?λ in τ matches the output pq!λ in ω) and it is not required in τ because its player p is neither a player of pq?λ nor a player of rq?λ. So the condition of Definition 7.7 is not satisfied for the output pq!λ in τ. Instead the trace τ′ = pq?λ · rq?λ is ω-pointed, as well as the trace τ″ = rq?λ · pq?λ.

Pointedness is preserved by suffixing.

Lemma 7.9. If τ is τ′-pointed and τ = τ₁ · τ₂, then τ₂ is τ′·τ₁-pointed.

Proof. Immediate, since (τ′·τ₁)·τ₂ = τ′·(τ₁·τ₂) and τ₂ is a suffix of τ and therefore its elements are a subset of those of τ.

Note on the other hand that if τ is τ′-pointed and τ′ = τ′₁·τ′₂, then it is not true that τ′₂·τ is τ′₁-pointed, because in this case the set of elements of τ′₂·τ is a superset of that of τ. For instance, if τ′₁ = ε, τ′₂ = pq!λ and τ = rs!λ′ · rs?λ′, then τ′₂·τ is not τ′₁-pointed.
A useful property of ω-pointedness is that it is preserved by the equivalence ≈_ω, which does not change the rightmost communication in ω-pointed traces. We use last(τ) to denote the last communication of τ.

Lemma 7.10.
Let τ be ω-pointed and τ ≈_ω τ′. Then τ′ is ω-pointed and last(τ′) = last(τ).

Proof. Let τ ≈_ω τ′. By Definition 7.5 τ′ is obtained from τ by m swaps of adjacent communications. The proof is by induction on the number m of swaps.
Case m =
0. The result is obvious.
Case m >
0. In this case there is τ₁ obtained from τ by m − 1 swaps and there are β, β′, τ₂ such that

τ₁ = τ[1...i−1] · β · β′ · τ₂ ≈_ω τ[1...i−1] · β′ · β · τ₂ = τ′

and play(β) ∩ play(β′) = ∅ and ¬(i + |ω| ∝_{ω·τ₁} i + 1 + |ω|).
By induction hypothesis τ₁ is ω-pointed and last(τ₁) = last(τ).
To show that τ′ is ω-pointed, observe that play(β) ∩ play(β′) = ∅ implies:

play(β) ⊆ play(β′) ∪ play(τ₂) ⇔ play(β) ⊆ play(τ₂)
play(β′) ⊆ play(τ₂) ⇔ play(β′) ⊆ play(β) ∪ play(τ₂)

From this we deduce req(i, τ₁) ⟺ req(i+1, τ′) and req(i+1, τ₁) ⟺ req(i, τ′), so if both τ₁[i] and τ₁[i +
1] are required in τ₁ we are done.
Otherwise, suppose that i + |ω| ∝_{ω·τ₁} j + |ω| where either req(j, τ₁) or j = n. If req(j, τ₁) then also req(j, τ′), as we just saw. Now, j cannot be i + 1 since ¬(i + |ω| ∝_{ω·τ₁} i + 1 + |ω|). This implies i + 1 + |ω| ∝_{ω·τ′} j + |ω|. Similarly we can show that i + 1 + |ω| ∝_{ω·τ₁} j + |ω| implies i + |ω| ∝_{ω·τ′} j + |ω|. Therefore τ′ is ω-pointed.
To show that last(τ) = last(τ′), assume ad absurdum that τ₂ = ε. Then τ[1...i−1]·β·β′ is ω-pointed and thus, as observed after Definition 7.7, we have either play(β) ∩ play(β′) ≠ ∅ or i + |ω| ∝_{ω·τ₁} i + 1 + |ω|. In both cases β and β′ cannot be swapped. So it must be τ₂ ≠ ε.

We now relate simple asynchronous global types with pairs of o-traces and traces.

Lemma 7.11. If G ‖ M is a simple agt and ω = otr(M) and τ ∈ Tr⁺(G), then τ is ω-well formed.

Proof. We prove by induction on τ that ⊢ iom(G, M) implies that otr(M)·τ is well formed.
Case τ = β. If β is an output the result is obvious. If β = pq?λ, by Rule [In] of Figure 3, we get M ≡ ⟨p,λ,q⟩·M′. Therefore ω = pq!λ·ω′ and ω·β is well formed.
Case τ = β·τ′ with τ′ ∈ Tr⁺(G′). If β = pq!λ, then G = pq!⊞_{i∈I} λ_i; G_i and λ = λ_k and G′ = G_k for some k ∈ I. From ⊢ iom(G, M) and Rule [Out] of Figure 3, we get ⊢ iom(G′, M·⟨p,λ,q⟩). By induction hypothesis on τ′, otr(M·⟨p,λ,q⟩)·τ′ is well formed. So since otr(M·⟨p,λ,q⟩) = ω·pq!λ we get that ω·τ is well formed.
If β = pq?λ, then G = pq?λ; G′. From ⊢ iom(G, M) and Rule [In] of Figure 3, we get M ≡ ⟨p,λ,q⟩·M′ and ⊢ iom(G′, M′). Let ω′ = otr(M′). Then ω ≍ pq!λ·ω′. By induction hypothesis on τ′ the trace ω′·τ′ is well formed. We want now to show that also the trace τ″ = ω·τ = pq!λ·ω′·pq?λ·τ′ is well formed, namely that in τ″ every input matches an output.
Note that the first input in τ″ is τ″[|ω|+1] = pq?λ. This input matches the output τ″[1] = pq!λ. For inputs τ″[i] with i > |ω|+1, we know that τ″[i] = (ω′·τ′)[i−2] and (ω′·τ′)[i−2] matches some output (ω′·τ′)[j] in ω′·τ′. Then τ″[i] matches τ″[j+1] if j ≤ |ω′| and τ″[j+2] otherwise. This proves that ω·τ is well formed.

We have now enough machinery to define events of simple agts, which are equivalence classes of pairs whose first elements are o-traces ω (representing queues) and whose second elements are traces τ (representing paths in the sgt components of the agts). The traces ω and τ are considered respectively modulo ≍ and modulo ≈_ω. The trace τ is ω-well formed, reflecting the input/output matching of sgts with respect to queues. The communication represented by an event is the last communication of τ.

Definition 7.12 (Global events).
(1) The equivalence ∼ on pairs (ω, τ), where τ ≠ ε is ω-pointed, is the least equivalence such that (ω, τ) ∼ (ω′, τ′) if ω ≍ ω′ and τ ≈_ω τ′.
(2) A g-event δ = [ω, τ]_∼ is the equivalence class of the pair (ω, τ). The communication of δ, notation i/o(δ), is defined to be last(τ). We denote by GE the set of g-events.

Notice that the function i/o can be applied both to an n-event (Definition 5.3(2)) and to a g-event (Definition 7.12(2)). In all cases the result is a communication.
Given an o-trace ω and an arbitrary trace τ, we want to build a g-event [ω, τ′]_∼ (Definition 7.14). To this aim we scan τ from right to left and remove all and only the communications τ[i] which violate the pointedness property.

Definition 7.13 (Trace filtering). The filtering of τ·τ′ by ω with cursor at τ, denoted by τ ⌈_ω τ′, is defined by induction on τ as follows:

ε ⌈_ω τ′ = τ′
(τ″·β) ⌈_ω τ′ = τ″ ⌈_ω (β·τ′) if β·τ′ is (ω·τ″)-pointed, and τ″ ⌈_ω τ′ otherwise

For example pq?λ · qp?λ ⌈_{pq!λ} ε = pq?λ ⌈_{pq!λ} ε = ε ⌈_{pq!λ} pq?λ = pq?λ. The resulting trace can also be empty, as in qp?λ ⌈_{pq!λ} ε = ε ⌈_{pq!λ} ε = ε. It is easy to verify that τ ⌈_ω τ′ is a subtrace of τ·τ′, and that if τ is ω-pointed, then τ ⌈_ω ε = τ.

Definition 7.14 (G-event of a pair). Let τ ≠ ε be ω-well formed.
The g-event generated by ω and τ, notation ev(ω, τ), is defined to be ev(ω, τ) = [ω, τ ⌈ω ǫ]∼.

Hence the trace of the event ev(ω, τ) is the filtering of τ by ω with cursor at the end of τ. This definition is sound since ω (cid:27) ω′ implies τ ⌈ω τ′ = τ ⌈ω′ τ′. Moreover ev(ω, τ) enjoys a useful property, as shown by the following lemma.

Lemma 7.15. If ev(ω, τ) is defined, then τ ⌈ω ǫ ≠ ǫ and i/o(ev(ω, τ)) = last(τ ⌈ω ǫ) = last(τ).
Proof. Let τ ≠ ǫ be ω-well formed and τ = τ′ · β. Then β is (ω · τ′)-well formed by Definition 7.2. This implies that β is (ω · τ′)-pointed by Definition 7.7, and thus τ ⌈ω ǫ = (τ′ · β) ⌈ω ǫ = τ′ ⌈ω β. This gives i/o(ev(ω, τ)) = last(τ ⌈ω ǫ) = last(τ).

Since the o-traces in the g-events of a simple agt correspond to the message queue, we define the causality and conflict relations only between g-events with the same o-traces. Causality is then simply prefixing of traces, while conflict is induced by the conflict relation on the p-events obtained by projecting the traces on participants (Definition 5.4(1)).

Definition 7.16 (Causality and conflict relations on g-events). The causality relation ≤ and the conflict relation # on the set of g-events GE are defined by:
(1) [ω, τ]∼ ≤ [ω, τ′]∼ if τ′ ≈ω τ · τ1 for some τ1;
(2) [ω, τ]∼ # [ω, τ′]∼ if τ@p # τ′@p for some p.

Concerning Clause (1), note that the relation ≤ is able to express cross-causality as well as local causality, thanks to the hypothesis of ω-well formedness of τ in any g-event [ω, τ]∼. Indeed, this hypothesis implies that, whenever τ ends by an input pq?λ, then the matched output pq!λ must appear either in ω, in which case the output has already occurred, or at some position i in τ. In the latter case, the g-event ev(ω, τ[1...i]), which represents the output pq!λ, is such that ev(ω, τ[1...i]) ≤ [ω, τ]∼. In fact, the statement ev(ω, τ[1...
i]) ≤ [ω, τ]∼ holds in general, also when τ[i] and τ[n] are not a pair of communications in the matching relation, as proved in Lemma 7.17. As regards Clause (2), note that if τ ≈ω τ′, then τ@p = τ′@p for all p, because ≈ω does not swap communications with the same player. Hence, conflict is well defined, since it does not depend on the trace chosen in the equivalence class. The condition τ@p # τ′@p states that participant p does the same actions in both traces up to some point, after which it performs two different actions in τ and τ′.

Lemma 7.17.
Let [ ω, τ ] ∼ be a g-event. Then ev ( ω, τ [1 ... i ]) ≤ [ ω, τ ] ∼ .Proof. If [ ω, τ ] ∼ is a g-event, then τ [1 ... i ] is ω -well formed by Definition 7.12, which impliesthat ev ( ω, τ [1 ... i ]) is defined. Let τ [ i ] = β , then τ [1 ... i ] ⌈ ω ǫ = τ ′ · β for some τ ′ by Lemma 7.15.We want to show that τ ≈ ω τ ′ · β · τ for some τ . By Definition 7.13 τ ′ is a subtrace of τ [1 ... i − τ ′ = τ · . . . · τ k and τ [1 ... i − = τ · β · . . . · β k · τ k , where k ≥
1. If k = τ · β · . . . · β k · τ k · β by ω with cursor at τ · β · . . . · β k erases β k . I. e. ( τ · β · . . . · τ k − · β k ) ⌈ ω τ k · β = ( τ · β · . . . · τ k − ) ⌈ ω τ k · β . Thismeans that β k · τ k · β is not ( ω · τ · β · . . . · τ k − )-pointed by Definition 7.13. Instead τ k · β is( ω · τ · β · . . . · τ k − · β k )-pointed. So the failure of the pointedness property is due to β k . ByDefinitions 7.7 and 7.3 τ · β · . . . · β k · τ k · β ω -swaps (in several steps) to τ · β · . . . · τ k · β · β k .By iterating this argument for all β j , ≤ j < k , we obtain τ [1 ... i ] ≈ ω τ · . . . · τ k · β · β · . . . · β k .We conclude τ = τ [1 ... i ] · τ [ i + ... n ] ≈ ω τ ′ · β · β · . . . · β k · τ [ i + ... n ].We get the events of a simple agt G k M by applying the function ev to the pairs madeof the o-trace of the queue M and a trace in the tree of G . Lemma 7.11 and Definition 7.14ensure that ev is defined. We then build the ES associated with a simple agt G k M asfollows. Definition 7.18 (Event Structure of a Global Type) . The event structure of the simple agt G k M is the triple S G ( G k M ) = ( GE ( G k M ) , ≤ G kM , G kM ) where: (1) GE ( G k M ) = { ev ( ω, τ ) | ω = otr ( M ) & τ ∈ Tr + ( G ) } ; (2) ≤ G kM is the restriction of ≤ to the set GE S ( G k M ) ; (3) G kM is the restriction of to the set GE S ( G k M ) . Example 7.19 . (1)
Let G1 = pq!λ; qp!λ′; qp?λ′; pq?λ. Then GE(G1 k ∅) = {δ1, ..., δ4} where δ1 = [ǫ, pq!λ]∼, δ2 = [ǫ, qp!λ′]∼, δ3 = [ǫ, pq!λ · qp!λ′ · qp?λ′]∼, δ4 = [ǫ, pq!λ · qp!λ′ · pq?λ]∼.
(2) Let G2 = pq!λ; pr!λ′; pq?λ; pr?λ′. Then GE(G2 k ∅) = {δ1, ..., δ4} where δ1 = [ǫ, pq!λ]∼, δ2 = [ǫ, pq!λ · pr!λ′]∼, δ3 = [ǫ, pq!λ · pq?λ]∼, δ4 = [ǫ, pq!λ · pr!λ′ · pr?λ′]∼.
(3) Let G3 = pq!λ; pr!λ′; pr?λ′; pq?λ. Then GE(G3 k ∅) = GE(G2 k ∅).
(4) Let G′1 = pq?λ; pr?λ′. Then GE(G′1 k ⟨p, λ, q⟩ · ⟨p, λ′, r⟩) = {δ1, δ2} where δ1 = [pq!λ · pr!λ′, pq?λ]∼ and δ2 = [pq!λ · pr!λ′, pr?λ′]∼.
(5) Let G′2 = pr?λ′; pq?λ. Then GE(G′2 k ⟨p, λ, q⟩ · ⟨p, λ′, r⟩) = GE(G′1 k ⟨p, λ, q⟩ · ⟨p, λ′, r⟩).

The following example shows that, due to the possibility of anticipating communications that are independent from a choice and to the fact that sgts are not able to represent concurrency explicitly, two diverging traces in the tree representation of G do not necessarily give rise to two conflicting events in GE(G k M).

Example 7.20.
Let G = pq!λ; rs!(λ1; pq?λ; rs?λ1 ⊞ λ2; pq?λ; rs?λ2). Then GE(G k ∅) contains the g-event [ǫ, pq!λ · pq?λ]∼ generated by the two diverging traces of G: pq!λ · rs!λ1 · pq?λ and pq!λ · rs!λ2 · pq?λ. Note on the other hand that if we replace r by q in G, namely if we consider the sgt G′ = pq!λ; qs!(λ1; pq?λ; qs?λ1 ⊞ λ2; pq?λ; qs?λ2), then GE(G′ k ∅) contains δ = [ǫ, pq!λ · qs!λ1 · pq?λ]∼ and δ′ = [ǫ, pq!λ · qs!λ2 · pq?λ]∼. Here δ # δ′ because (pq!λ · qs!λ1 · pq?λ)@q = s!λ1 · p?λ # s!λ2 · p?λ = (pq!λ · qs!λ2 · pq?λ)@q. So, here the two occurrences of pq?λ are represented by two distinct events that are in conflict. We end this section showing that the obtained ES is a PES.
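The conflict check of Definition 7.16(2), illustrated by the example above, can be prototyped directly. The following Python fragment is a sketch in our own encoding (communications as (sender, receiver, kind, label) tuples; the names player, project and in_conflict are ours, not the paper's): it projects a trace onto the actions of a participant and detects when two traces diverge on some participant's projection.

```python
# A communication is (sender, receiver, kind, label), where kind "!"
# stands for an output pq!lab and "?" for an input pq?lab.

def player(comm):
    p, q, kind, _ = comm
    return p if kind == "!" else q  # outputs are played by the sender

def project(trace, r):
    # Projection trace @ r: keep the actions whose player is r,
    # recording only the partner, the kind and the label.
    return [(q if kind == "!" else p, kind, lab)
            for (p, q, kind, lab) in trace
            if player((p, q, kind, lab)) == r]

def in_conflict(t1, t2):
    # Two traces conflict if some participant performs the same
    # actions up to a point and then two different ones.
    for r in {player(c) for c in t1 + t2}:
        for a, b in zip(project(t1, r), project(t2, r)):
            if a != b:
                return True
    return False

# The two branch traces of the second global type above (labels lam1, lam2):
t1 = [("p", "q", "!", "lam"), ("q", "s", "!", "lam1"), ("p", "q", "?", "lam")]
t2 = [("p", "q", "!", "lam"), ("q", "s", "!", "lam2"), ("p", "q", "?", "lam")]
print(project(t1, "q"))     # [('s', '!', 'lam1'), ('p', '?', 'lam')]
print(in_conflict(t1, t2))  # True: the projections on q diverge
```

The check is run here on full traces; in the text it applies to the (filtered, pointed) traces inside g-events, which is what lets the two diverging traces of the first global type collapse to a single event.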
Proposition 7.21 .
Let G k M be a simple agt. Then S G (G k M) is a prime event structure.
Proof. We show that ≤ is a partial order and that # is irreflexive, symmetric and hereditary. Reflexivity and transitivity of ≤ follow easily from the properties of concatenation and the properties of the two equivalences in Definitions 5.2 and 7.5. As for antisymmetry note that, by Clause (1) of Definition 7.16, if [ω, τ]∼ ≤ [ω, τ′]∼ and [ω, τ′]∼ ≤ [ω, τ]∼, then τ′ ≈ω τ · τ1 and τ ≈ω τ′ · τ2 for some τ1 and τ2. Hence τ ≈ω τ · τ1 · τ2, which implies τ1 = τ2 = ǫ, i.e. τ ≈ω τ′. The conflict between g-events inherits irreflexivity, symmetry and hereditariness from the conflict between p-events. In particular, for hereditariness, suppose that [ω, τ]∼ # [ω, τ′]∼ ≤ [ω, τ′′]∼. Then τ′′ ≈ω τ′ · τ0 for some τ0 and τ′′@p = (τ′ · τ0)@p = (τ′@p) · (τ0@p) # τ@p since τ′@p # τ@p.

8. Equivalence of the two Event Structure Semantics

In this section we establish our main theorem for typed networks, namely the isomorphism between the configuration domain of the FES of the network and the configuration domain of the PES of its simple agt. To prove the various results leading to this theorem, we will largely use the characterisation of configurations as proving sequences, as given in Proposition 3.7. Let us briefly sketch how these results are articulated. The proof of the isomorphism is grounded on the Subject Reduction Theorem (Theorem 6.18) and the Session Fidelity Theorem (Theorem 6.19). These theorems state that if ⊢ N k M : G k M, then N k M τ−→ N′ k M′ if and only if G k M τ−→ G′ k M′, and in both directions ⊢ N′ k M′ : G′ k M′. We can then relate the ESs of networks and simple agts by connecting them through the traces of their transition sequences, and by taking into account the queues by means of the mapping otr given by Definition 5.1. This is achieved as follows. If N k M τ−→ and otr(M) = ω, then the function nec (Definition 8.8) applied to ω and τ gives a proving sequence in S N (N k M) (Theorem 8.11).
Vice versa, if ρ1; ···; ρn is a proving sequence in S N (N k M), then N k M τ−→ N′ k M′ where τ = i/o(ρ1) · ... · i/o(ρn) and i/o is the mapping given in Definition 5.3(2) (Theorem 8.12). Similarly, if G k M τ−→ G′ k M′ and otr(M) = ω, then the function gec (Definition 8.21) applied to ω and τ gives a proving sequence in S G (G k M) (Theorem 8.26). Lastly, if δ1; ...; δn is a proving sequence in S G (G k M), then G k M τ−→ G′ k M′, where τ = i/o(δ1) · ... · i/o(δn) and i/o is the mapping given in Definition 7.12(2) (Theorem 8.27). It is then natural to split this section in three subsections: the first establishing the relationship between network transition sequences and proving sequences of their event structure, the second doing the same for simple agts, and finally a third subsection in which the isomorphism between the two configuration domains is proved relying on these relationships.

8.1. Transition Sequences of Networks and Proving Sequences of their ESs.
We start by showing how network communications a ff ect n-events in the associated ES.To this aim we define two partial operators ♦ and (cid:7) , which applied to a communication β and an n-event ρ yield another n-event ρ ′ (when defined). The intuition is that ρ ′ representsthe event ρ as it was before the communication β , or as it will be after the communication β , respectively. So, in particular, if ρ is not located at play ( β ), its p-event will remainunchanged under both mappings ♦ and (cid:7) , and only its queue will be a ff ected. We shallnow explain in more detail how these operators work.The operator ♦ , when applied to β and ρ , yields the n-event ρ ′ obtained from ρ beforeexecuting the communication β , if it exists. We call β ♦ ρ the retrieval of ρ before β . So, if β = pq ! λ and the queue of ρ ends with pq ! λ , the queue of ρ ′ is obtained by removing pq ! λ from the end of the queue of ρ . Moreover, if ρ is located at p , the p-event of ρ ′ is obtainedby adding q ! λ in front of the p-event of ρ . If β = pq ? λ , the queue of ρ ′ is obtained by adding pq ! λ on top of the queue of ρ . Moreover, if ρ is located at q , the p-event of ρ ′ is obtained byadding q ? λ in front of the p-event of ρ .The operator (cid:7) , when applied to β and ρ , yields the n-event ρ ′ obtained from ρ afterexecuting the communication β , if this event exists. We call β (cid:7) ρ the residual of ρ after β . So,if β = pq ! λ the queue of ρ ′ is obtained from the queue of ρ by enqueuing pq ! λ . Moreover, if ρ is located at p and its p-event starts with the action q ! λ , then the p-event of ρ ′ is obtainedby removing this action, provided the result is still a p-event (this will not happen if the p-event of ρ is a simple action, because in this case it will be consumed by the communication β ); otherwise, the operation is not defined for ρ located at p . If β = pq ? λ and the queue of ρ starts with pq ! λ , the queue of ρ ′ is obtained by dequeuing pq ! λ . 
Moreover, if ρ is located at q and its p-event starts with the action p?λ, the p-event of ρ′ is obtained by removing p?λ, if possible; otherwise, the operation is not defined.

Definition 8.1 (Retrieval and residual of an n-event with respect to a communication).
(1) (Retrieval of an n-event before a communication) The operator ♦ applied to a communication β and an n-event ρ is defined by cases on β:
pq!λ ♦ (r :: ((ω · pq!λ)(cid:27), η)) = r :: (ω(cid:27), q!λ · η) if r = p, and r :: (ω(cid:27), η) otherwise;
pq?λ ♦ (r :: (ω(cid:27), η)) = r :: ((pq!λ · ω)(cid:27), p?λ · η) if r = q, and r :: ((pq!λ · ω)(cid:27), η) otherwise.
(2) (Residual of an n-event after a communication) The operator (cid:7) applied to a communication β and an n-event ρ is defined by cases on β:
pq!λ (cid:7) (r :: (ω(cid:27), η)) = r :: ((ω · pq!λ)(cid:27), η′) if r = p and η = q!λ · η′, and r :: ((ω · pq!λ)(cid:27), η) if r ≠ p;
pq?λ (cid:7) (r :: ((pq!λ · ω)(cid:27), η)) = r :: (ω(cid:27), η′) if r = q and η = p?λ · η′, and r :: (ω(cid:27), η) if r ≠ q.

Notice that in Clause (2) of the above definition η′ ≠ ǫ, see Definition 4.1. Observe also that the operators ♦ and (cid:7) preserve the communication of n-events, namely i/o(β ♦ ρ) = i/o(β (cid:7) ρ) = i/o(ρ). The retrieval and residual operators on n-events induce (partial) mappings on o-traces, which it is handy to define explicitly.

Definition 8.2. The partial mappings β ⊲ ω and β ◮ ω are defined by:
(1) pq!λ ⊲ (ω · pq!λ) = ω and pq?λ ⊲ ω = pq!λ · ω;
(2) pq!λ ◮ ω = ω · pq!λ and pq?λ ◮ (pq!λ · ω) = ω.

These mappings enjoy the following commutativity properties. In their proofs and elsewhere we shall use the complement β̄ of an input β, defined by β̄ = pq!λ if β = pq?λ.

Lemma 8.3.
Let play ( β ) ∩ play ( β ) = ∅ . (1) If both β ⊲ ω and β ⊲ ω are defined, then β ⊲ ( β ⊲ ω ) is defined and β ⊲ ( β ⊲ ω ) (cid:27) β ⊲ ( β ⊲ ω ) . (2) If both β ◮ ω and β ◮ ( β ⊲ ω ) are defined, then β ⊲ ( β ◮ ω ) (cid:27) β ◮ ( β ⊲ ω ) .Proof. (1) Since ω i = β i ⊲ ω is defined for i ∈ { , } , by Definition 8.2(1) ω (cid:27) ω i · β i when β i isan output. Then from play ( β ) ∩ play ( β ) = ∅ we get ω (cid:27) ω ′ · β · β (cid:27) ω ′ · β · β for some ω ′ when both β and β are outputs. Using Definition 8.2(1) we compute: β ⊲ ( β ⊲ ω ) (cid:27) β ⊲ ( β ⊲ ω ) (cid:27) ω ′ if both β and β are outputs β i · ω j if β i is an input and β j is an output β · β · ω if both β and β are inputs(2) Since ω = β ◮ ω is defined, by Definition 8.2(2) ω (cid:27) β · ω when β is an input.Since β ◮ ( β ⊲ ω ) is defined, ω = β ⊲ ω is defined and by Definition 8.2 ω (cid:27) ω · β when β is an output and ω (cid:27) β · ω · β for some ω such that ω (cid:27) β · ω and ω (cid:27) ω · β , when β is an output and β is an input. Using Definition 8.2 we compute: β ⊲ ( β ◮ ω ) (cid:27) β ◮ ( β ⊲ ω ) (cid:27) ω · β if both β and β are outputs ω if β is an output and β is an input β · ω · β if β is an input and β is an output β · ω if both β and β are inputsNotice that the facts that both β ⊲ω and β ◮ ω are defined and play ( β ) ∩ play ( β ) = ∅ do notimply that β ◮ ( β ⊲ ω ) is defined. As an example, consider the case where β is an output, β is the complementary input, i.e. β = β , and ω = β , which gives β ⊲ ω = β ◮ ω = ǫ .The problem here is that β ⊲ ω and β ◮ ω consume the same output β in the queue ω ,hence they are not independent.Using the mappings β ⊲ ω and β ◮ ω , we can write the retrieval and residual of n-eventsin a more concise (but less explicit) form, as stated in the following lemma which usesthe projection of o-traces on participants (Definition 5.4(1)). This form simplifies somestatements and proofs. Lemma 8.4 . 
(1) If β ♦ r :: ( ω (cid:27) , η ) is defined, then β ♦ r :: ( ω (cid:27) , η ) = r :: (( β ⊲ ω ) (cid:27) , β @ r · η ) . (2) If β (cid:7) r :: ( ω (cid:27) , β @ r · η ) is defined, then β (cid:7) r :: ( ω (cid:27) , β @ r · η ) = r :: (( β ◮ ω ) (cid:27) , η ) . An immediate consequence of this lemma is that the retrieval and residual operatorsare inverse of each other.
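The o-trace mappings β ⊲ ω and β ◮ ω of Definition 8.2 are simple enough to prototype directly. The following Python fragment is a sketch in our own encoding (queues as lists of (sender, receiver, label) triples; the names retrieve and residual are ours, and None stands for "undefined"): it implements both mappings and checks, on a small queue, the analogue of the inverse property that Lemma 8.5 states for n-events.

```python
# A communication is ("!", p, q, lab) for the output pq!lab and
# ("?", p, q, lab) for the input pq?lab; a queue is a list of
# (sender, receiver, label) triples.

def retrieve(beta, omega):
    # beta ⊲ omega: the queue as it was before beta happened.
    kind, p, q, lab = beta
    if kind == "!":
        # pq!lab ⊲ (omega' · pq!lab) = omega'; undefined otherwise.
        return omega[:-1] if omega and omega[-1] == (p, q, lab) else None
    return [(p, q, lab)] + omega  # pq?lab ⊲ omega = pq!lab · omega

def residual(beta, omega):
    # beta ◮ omega: the queue as it is after beta happened.
    kind, p, q, lab = beta
    if kind == "!":
        return omega + [(p, q, lab)]  # pq!lab ◮ omega = omega · pq!lab
    # pq?lab ◮ (pq!lab · omega) = omega; undefined otherwise.
    return omega[1:] if omega and omega[0] == (p, q, lab) else None

# Inverse property, read on o-traces:
omega = [("p", "q", "lam")]
out, inp = ("!", "p", "q", "lam2"), ("?", "p", "q", "lam")
assert retrieve(out, residual(out, omega)) == omega
assert residual(inp, retrieve(inp, [])) == []
print(residual(inp, omega))  # []: reading pq?lam dequeues pq!lam
```

As in the text, each mapping is total in one direction and partial in the other: ⊲ is partial on outputs (the queue must end with the matching message) and ◮ is partial on inputs (the queue must start with it).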
Lemma 8.5 . (1) If β ♦ ρ is defined, then β (cid:7) ( β ♦ ρ ) = ρ . (2) If β (cid:7) ρ is defined, then β ♦ ( β (cid:7) ρ ) = ρ . Moreover, the retrieval and residual operators preserve the flow and conflict relations.
Lemma 8.6 . (1) If ρ ≺ ρ ′ and β ♦ ρ is defined, then β ♦ ρ ≺ β ♦ ρ ′ . (2) If ρ ≺ ρ ′ and both β (cid:7) ρ and β (cid:7) ρ ′ are defined, then β (cid:7) ρ ≺ β (cid:7) ρ ′ . If ρ ρ ′ and β ♦ ρ is defined, then β ♦ ρ β ♦ ρ ′ . (4) If ρ ρ ′ and both β (cid:7) ρ and β (cid:7) ρ ′ are defined, then β (cid:7) ρ β (cid:7) ρ ′ .Proof. (1) Since ρ ≺ ρ ′ and β ♦ ρ is defined, also β ♦ ρ ′ is defined. If ρ ≺ ρ ′ , then– either ρ = p :: ( ω (cid:27) , η ) and ρ ′ = p :: ( ω (cid:27) , η ′ ) and η < η ′ ,– or ρ = p :: ( ω (cid:27) , ζ · q ! λ ) and ρ ′ = q :: ( ω (cid:27) , ζ ′ · p ? λ ) and ( ω @ p · ζ ) (cid:31) q v Z w ( ω @ q · ζ ′′ ) (cid:31) p for some ζ ′′ and χ such that ( ζ ′ · p ? λ ) (cid:31) p - ( ζ ′′ · p ? λ · χ ) (cid:31) p .In the first case by Lemma 8.4(1) it is easy to get the result.In the second case we consider β = pq ? λ ′ , the proofs for other choices of β being similarand sometimes simpler. By Definition 8.1(1) β ♦ ρ = p :: (( pq ! λ ′ · ω ) (cid:27) , ζ · q ! λ ) and β ♦ ρ ′ = q ::(( pq ! λ ′ · ω ) (cid:27) , p ? λ ′ · ζ ′ · p ? λ ). Notice that ( pq ! λ ′ · ω ) @ p = q ! λ ′ · ω @ p and ( pq ! λ ′ · ω ) @ q = ω @ q . Then, from ( ω @ p · ζ ) (cid:31) q v Z w ( ω @ q · ζ ′′ ) (cid:31) p we get( q ! λ ′ · ω @ p · ζ ) (cid:31) q v Z w ( ω @ q · p ? λ ′ · ζ ′′ ) (cid:31) p since ( ω @ q · p ? λ ′ · ζ ′′ ) (cid:31) p - ( p ? λ ′ · ω @ q · ζ ′′ ) (cid:31) p . From ( ζ ′ · p ? λ ) (cid:31) p - ( ζ ′′ · p ? λ · χ ) (cid:31) p weget ( p ? λ ′ · ζ ′ · p ? λ ) (cid:31) p - ( p ? λ ′ · ζ ′′ · p ? λ · χ ) (cid:31) p . We conclude β ♦ ρ ≺ β ♦ ρ ′ .(2) The proof is similar to that of Fact (1).(3) Since ρ ρ ′ and β ♦ ρ is defined, also β ♦ ρ ′ is defined. Let ρ = p :: ( ω (cid:27) , η ) and ρ ′ = p :: ( ω (cid:27) , η ′ ) and η η ′ . Let β = qp ? λ , the proofs for other choices of β being similar andsometimes simpler. Letting ω ′ (cid:27) qp ! λ · ω , we get β ♦ ρ = p :: ( ω ′ (cid:27) , q ? λ · η ) and β ♦ ρ ′ = p ::( ω ′ (cid:27) , q ? λ · η ′ ). 
Since η η ′ implies q ? λ · η q ? λ · η ′ , we conclude β ♦ ρ β ♦ ρ ′ .(4) The proof is similar to that of Fact (3).We now show that the operators ♦ and (cid:7) applied to a communication β modify then-events of an ES in the same way as the (backward or forward) execution of β would doin the underlying network. Lemma 8.7 .
Let N k M β −→ N ′ k M ′ . Then otr ( M ) (cid:27) β ⊲ otr ( M ′ ) and (1) if ρ ∈ NE ( N ′ k M ′ ) , then β ♦ ρ ∈ NE ( N k M ) ; (2) if ρ ∈ NE ( N k M ) and β (cid:7) ρ is defined, then β (cid:7) ρ ∈ NE ( N ′ k M ′ ) .Proof. Let β = pq ! λ . From N k M β −→ N ′ k M ′ we get p [[ L i ∈ I q ! λ i ; P i ]] ∈ N and p [[ P k ]] ∈ N ′ with λ = λ k for some k ∈ I and M ′ ≡ M · h p , λ, q i and r [[ R r ]] ∈ N i ff r [[ R r ]] ∈ N ′ for all r , p .Let β = pq ? λ . From N k M β −→ N ′ k M ′ we get q [[ Σ i ∈ I p ? λ i ; Q i ]] ∈ N and q [[ Q k ]] ∈ N ′ with λ = λ k for some k ∈ I and M ≡ h p , λ, q i · M ′ and r [[ R r ]] ∈ N i ff r [[ R r ]] ∈ N ′ for all r , q .In both cases we get otr ( M ) (cid:27) β ⊲ otr ( M ′ ).(1) Assume ρ = r :: ( ω (cid:27) , η ) ∈ NE ( N ′ k M ′ ). By Definition 5.8(1) we have ω = otr ( M ′ ).Let ω ′ = otr ( M ) (cid:27) β ⊲ ω .Let β = pq ! λ . By Definition 5.8(1) we have η ∈ PE ( P k ) if r = p and η ∈ PE ( R r ) if r , p .By Definition 8.1(1) we get β ♦ ρ = r :: ( ω ′ (cid:27) , q ! λ · η ) if r = p and β ♦ ρ = r :: ( ω ′ (cid:27) , η ) if r , p .By Definition 4.3(1) q ! λ · η ∈ PE ( L i ∈ I q ! λ i ; P i ) and η ∈ PE ( R r ) if r , p . We conclude that β ♦ ρ ∈ NE ( N k M ) by Definition 5.8(1).Let β = pq ? λ . By Definition 5.8(1) we have η ∈ PE ( Q k ) if r = q and η ∈ PE ( R r ) if r , q . ByDefinition 8.1(1) we have β ♦ ρ = r :: ( ω ′ (cid:27) , p ? λ · η ) if r = q and β ♦ ρ = r :: ( ω ′ (cid:27) , η ) if r , q .By Definition 4.3(1) p ? λ · η ∈ PE ( Σ i ∈ I p ? λ i ; Q i ) and η ∈ PE ( R r ) if r , p . We conclude that β ♦ ρ ∈ NE ( N k M ) by Definition 5.8(1).(2) Assume now ρ = r :: ( ω (cid:27) , η ) ∈ NE ( N k M ). By Definition 5.8(1) we have ω = otr ( M ).Let ω ′ = otr ( M ′ ) (cid:27) β ◮ ω . et β = pq ! λ . If r = p and β (cid:7) ρ is defined, then we get β (cid:7) ρ = r :: ( ω ′ (cid:27) , η ′ ) where η = q ! λ · η ′ by Definition 8.1(2). By Definition 5.8(1) we have q ! λ · η ′ ∈ PE ( L i ∈ I q ! λ i ; P i ). 
ByDefinition 4.3(1) η ′ ∈ PE ( P k ). If r , p , by Definition 8.1(2) we have β (cid:7) ρ = r :: ( ω ′ (cid:27) , η ). ByDefinition 5.8(1) η ∈ PE ( R r ). We conclude that β (cid:7) ρ ∈ NE ( N ′ k M ′ ) by Definition 5.8(1).Let β = pq ? λ . If r = q and β (cid:7) ρ is defined, then by Definition 8.1(2) we get β (cid:7) ρ = r :: ( ω ′ (cid:27) , η ′ )where η = p ? λ · η ′ . By Definition 5.8(1) we have p ? λ · η ′ ∈ PE ( Σ i ∈ I p ? λ i ; Q i ) and η ′ ∈ PE ( Q k ).If r , p , by Definition 8.1(2) we have β (cid:7) ρ = r :: ( ω ′ (cid:27) , η ). By Definition 5.8(1) η ∈ PE ( R r ). Weconclude that β (cid:7) ρ ∈ NE ( N ′ k M ′ ) by Definition 5.8(1).We now define the total function nec , which yields sequences of n-events starting frompairs of o-traces and traces. We use the projection given in Definition 5.4(1). Definition 8.8 (n-events from pairs of o-traces and traces) . We define the sequence of n-events corresponding to ω and τ by nec ( ω, τ ) = ρ ; · · · ; ρ n where ρ i = p i :: ( ω (cid:27) , η i ) if { p i } = play ( τ [ i ]) and η i = τ [1 ... i ] @ p i It is immediate to see that, if τ = pq ! λ or τ = pq ? λ , then nec ( ω, τ ) consists only of then-event p :: ( ω (cid:27) , q ! λ ) or of the n-event q :: ( ω (cid:27) , p ? λ ), respectively, because τ [1 ... = τ [1].We show now that two n-events appearing in the sequence generated from a givenpair ( ω , τ ) cannot be in conflict. Moreover, from nec ( ω, τ ) we can recover τ by means of thefunction i / o of Definition 5.3(2). Lemma 8.9 .
Let nec ( ω, τ ) = ρ ; · · · ; ρ n . (1) If ≤ k , l ≤ n, then ¬ ( ρ k ρ l ) ; (2) τ [ i ] = i / o ( ρ i ) for all i, ≤ i ≤ n.Proof. (1) Let ρ i = p i :: ( ω (cid:27) , η i ) for all i , 1 ≤ i ≤ n . If p k , p l , then ρ k and ρ l cannot be inconflict. If p k = p l , then by Definition 8.8 either η k < η l or η k < η l . So in all cases we have ¬ ( ρ k ρ l ).(2) Immediate from Definition 8.8.The following lemma relates the operators ♦ and (cid:7) with the mapping nec . This will behandy for the proof of Theorem 8.11. Lemma 8.10 . (1)
Let τ = β · τ ′ and ω = β ⊲ ω ′ . If nec ( ω, τ ) = ρ ; · · · ; ρ n and nec ( ω ′ , τ ′ ) = ρ ′ ; · · · ; ρ ′ n , then β ♦ ρ ′ i = ρ i for all i, ≤ i ≤ n. (2) Let τ = β · τ ′ and ω ′ = β ◮ ω . If nec ( ω, τ ) = ρ ; · · · ; ρ n and nec ( ω ′ , τ ′ ) = ρ ′ ; · · · ; ρ ′ n , then β (cid:7) ρ i = ρ ′ i for all i, ≤ i ≤ n.Proof. (1) Let ρ i = p i :: ( ω (cid:27) , η i ) for all i , 1 ≤ i ≤ n and ρ ′ i = p i :: ( ω ′ (cid:27) , η ′ i ) for all i , 2 ≤ i ≤ n . Notethat τ [ i ] = τ ′ [ i −
1] for all i , 2 ≤ i ≤ n . By Definition 8.8 η i = τ [1 ... i ] @ p i = ( β · τ ′ [1 ... i − p i for all i , 1 ≤ i ≤ n and η ′ i = τ ′ [1 ... i −
1] @ p i for all i , 2 ≤ i ≤ n . Then by Lemma 8.4(1) wehave β ♦ ρ ′ i = p i :: (( β ⊲ ω ′ ) (cid:27) , β @ p i · η ′ i ) = p i :: ( ω (cid:27) , η i ) = ρ i for all i , 2 ≤ i ≤ n .(2) From Fact (1) and Lemma 8.5(1).We end this subsection with the two theorems for networks discussed at the beginningof the whole section. Theorem 8.11 . If N k M τ −→ N ′ k M ′ , then nec ( otr ( M ) , τ ) is a proving sequence in S N ( N k M ) . roof. The proof is by induction on τ . Let ω = otr ( M ). Case τ = β . Assume first that β = pq ! λ . From N k M β −→ N ′ k M ′ we get p [[ L i ∈ I q ! λ i ; P i ]] ∈ N with λ = λ k for some k ∈ I . Thus p [[ P k ]] ∈ N ′ and M ′ ≡ M · h p , λ, q i . By Definition 4.3(1) q ! λ ∈ PE ( L i ∈ I q ! λ i ; P i ). By Definition 5.8(1) p :: ( ω (cid:27) , q ! λ ) ∈ NE ( N k M ). By Definition 8.8 nec ( ω, β ) = ρ = p :: ( ω (cid:27) , q ! λ ). Clearly, ρ is a proving sequence in S N ( N k M ), since ρ ≺ ρ would imply ρ = p :: ( ω (cid:27) , η ) for some η such that η < q ! λ , which is not possible.Assume now that β = pq ? λ . In this case we get q [[ Σ i ∈ I p ? λ i ; Q i ]] ∈ N with λ = λ k for some k ∈ I . Thus q [[ Q k ]] ∈ N ′ and M = h p , λ, q i · M ′ . With a similar reasoning as in the previouscase, we obtain nec ( ω, β ) = ρ = q :: ( ω (cid:27) , p ? λ ). Since ω (cid:27) pq ! λ · ω ′ , where ω ′ = otr ( M ′ ),it is immediate to see that ρ is queue-justified. This implies that there is no event ρ in NE ( N k M ) such that ρ ≺ ρ , and thus ρ is a proving sequence in S N ( N k M ). Case τ = β · τ ′ with τ ′ , ǫ . In this case, from N k M τ −→ N ′ k M ′ we get N k M β −→ N ′′ k M ′′ τ ′ −→ N ′ k M ′ for some N ′′ , M ′′ . Let ω ′ = otr ( M ′′ ). By Lemma 8.7 ω = β ⊲ ω ′ . Let nec ( ω, τ ) = ρ ; · · · ; ρ n and nec ( ω ′ , τ ′ ) = ρ ′ ; · · · ; ρ ′ n . By induction nec ( ω ′ , τ ′ ) is a proving sequence in S N ( N ′′ kM ′′ ). By Lemma 8.10(1) β ♦ ρ ′ j = ρ j for all j , 2 ≤ j ≤ n . 
By Lemma 8.7(1) this implies ρ j ∈ NE ( N k M ) for all j , 2 ≤ j ≤ n . From the proof of the base case we know that ρ = p :: ( ω (cid:27) , β @ p ) ∈ NE ( N k M ) where { p } = play ( β ). What is left to show is that ρ ; · · · ; ρ n is a proving sequence in S N ( N k M ). By Lemma 8.9(1) no two events in this sequence canbe in conflict. Let ρ ∈ NE ( N k M ) and ρ ≺ ρ h for some h , 1 ≤ h ≤ n . As argued in the basecase, this implies h >
1. We distinguish two cases, depending on whether β (cid:7) ρ is definedor not.If β (cid:7) ρ is defined, by Lemma 8.7(2) β (cid:7) ρ ∈ NE ( N ′′ k M ′′ ) and by Lemma 8.6(2) we have β (cid:7) ρ ≺ β (cid:7) ρ h . Let ρ ′ = β (cid:7) ρ . By Lemma 8.10(2) β (cid:7) ρ j = ρ ′ j for all j , 2 ≤ j ≤ n . Thus we have ρ ′ ≺ ρ ′ h . Since nec ( ω ′ , τ ′ ) is a proving sequence in S N ( N ′′ k M ′′ ), by Definition 3.6 there is l < h such that either ρ ′ = ρ ′ l or ρ ′ ρ ′ l ≺ ρ ′ h . In the first case we have ρ = β ♦ ρ ′ = β ♦ ρ ′ l = ρ l .In the second case, from ρ ′ ρ ′ l we deduce ρ ρ l by Lemma 8.6(3), and from ρ ′ l ≺ ρ ′ h wededuce ρ l ≺ ρ h by Lemma 8.6(1).If β (cid:7) ρ is undefined, then by Definition 8.1(2) either ρ = ρ or ρ = p :: ( ω (cid:27) , π · ζ ) with π , β @ p , which implies ρ ρ . In the first case we are done. So, suppose ρ ρ . Let π ′ = β @ p . Since ρ and ρ are events in NE ( N k M ), we may assume π = q ! λ and π ′ = q ! λ ′ and therefore β = pq ! λ ′ . Indeed, we know that play ( β ) = { p } , and β cannot be an input qp ? λ ′ since in this case there should be ρ = p :: ( ω (cid:27) , q ? λ ) ∈ NE ( N k M ) by narrowing, and the twoinput n-events ρ and ρ = p :: ( ω (cid:27) , q ? λ ′ ) could not be both justified by the same queue ω .Note that ρ cannot be a local cause of ρ h , because ρ h = p :: ( ω (cid:27) , π · ζ · η ) would imply ρ h ρ ,contradicting what said above. Therefore ρ is a cross-cause of ρ h , so ρ = p :: ( ω (cid:27) , π · ζ ′ · r ! λ ′′ )and ρ h = r :: ( ω (cid:27) , ζ ′′ · p ? λ ′′ ). We know that ρ h = β ♦ ρ ′ h . By Definition 8.1(1) we have ρ ′ h = r :: (( ω · β ) (cid:27) , ζ ′′ · p ? λ ′′ ), because r is the receiver of a message sent by p and thus byconstruction r , p . Since ρ ′ h is an input n-event in NE ( N ′′ k M ′′ ), it must either be justified bythe queue ω · β or have a cross-cause in NE ( N ′′ k M ′′ ). 
Since ρ h is not justified by the queue ω (because ρ ≺ ρ h ), the only way for ρ ′ h to be justified by ω · β would be that pr ! λ ′′ = β , thatis r = q and λ ′′ = λ ′ , and that ( ∗ ) ( ω @ p ) (cid:31) q v Z w ζ (cid:31) p and ( ζ ′′ · p ? λ ′ ) (cid:31) p v ( ζ · p ? λ ′ · χ ) (cid:31) p for some ζ and χ , see Definition 5.7. This means that ζ (cid:31) p is the subsequence of ζ ′′ (cid:31) p obtained by keeping all and only its inputs. Now, if ρ ′ h = q :: (( ω · β ) (cid:27) , ζ ′′ · p ? λ ′ ), then h = q :: ( ω (cid:27) , ζ ′′ · p ? λ ′ ). Since ρ = p :: ( ω (cid:27) , q ! λ · ζ ′ · q ! λ ′ ) is a cross-cause of ρ h , we have( ∗∗ ) ( ω @ p · q ! λ · ζ ′ ) (cid:31) q v Z w ( ω @ q · ζ · χ ′ ) (cid:31) p and ( ζ ′′ · p ? λ ′ ) (cid:31) p v ( ζ · p ? λ ′ · χ ′ ) (cid:31) p forsome ζ and χ ′ , see Definition 5.6(1)(b). It follows that the inputs in ζ (cid:31) p coincide withthe inputs in ζ ′′ (cid:31) p and thus with those in ζ (cid:31) p . From (*) we know that all inputs in ζ (cid:31) p match some output in ( ω @ p ) (cid:31) q . Therefore no input in ( ω @ q · ζ · χ ′ ) (cid:31) p can match theoutput q ! λ in ( ω @ p · q ! λ · ζ ′ ) (cid:31) q , contradicting (**). Hence ρ ′ h must have a cross-cause in NE ( N ′′ k M ′′ ). Let ρ ′ be such a cross-cause. Then ρ ′ = p :: (( ω · β ) (cid:27) , ζ · r ! λ ′′ ) for some ζ .Since nec ( ω ′ , τ ′ ) is a proving sequence in S N ( N ′′ k M ′′ ), by Definition 3.6 there is l < h suchthat either ρ ′ = ρ ′ l or ρ ′ ρ ′ l ≺ ρ ′ h . In the first case β ♦ ρ ′ = β ♦ ρ ′ l = ρ l ≺ ρ h , and ( β ♦ ρ ′ ) ρ because β ♦ ρ ′ = p :: ( ω (cid:27) , π ′ · ζ · r ! λ ′′ ). In the second case, let ρ ′ l = p :: (( ω · β ) (cid:27) , η ) for some η . From ρ ′ ρ ′ l ≺ ρ ′ h we derive β ♦ ρ ′ β ♦ ρ ′ l ≺ β ♦ ρ ′ h by Lemma 8.6(3) and (1). This implies ρ l = β ♦ ρ ′ l = p :: ( ω (cid:27) , π ′ · η ). Hence ρ ρ l ≺ ρ h . Theorem 8.12 . 
If ρ ; · · · ; ρ n is a proving sequence in S N ( N k M ) , then N k M τ −→ N ′ k M ′ where τ = i / o ( ρ ) · · · i / o ( ρ n ) .Proof. The proof is by induction on the length n of the proving sequence. Let ω = otr ( M ). Case n = . Let i / o ( ρ ) = β where β = pq ? λ . The proof for β = pq ! λ is similar and simpler. ByDefinition 5.8(1) ρ = q :: ( ω (cid:27) , ζ · p ? λ ). Note that it must be ζ = ǫ , since otherwise we wouldhave q :: ( ω (cid:27) , ζ ) ∈ NE ( N k M ) by narrowing, where q :: ( ω (cid:27) , ζ ) ≺ ρ by Definition 5.6(1)(a),contradicting the hypothesis that ρ is minimal. Moreover, ρ cannot be justified by anoutput n-event ρ ∈ NE ( N k M ), because this would imply ρ ≺ ρ , contradicting againthe minimality of ρ . Hence, by Definition 5.8(1) ρ = q :: ( ω (cid:27) , p ? λ ) must be queue-justified, which means that ω (cid:27) pq ! λ · ω ′ . Thus M ≡ h p , λ, q i · M ′ , where otr ( M ′ ) = ω ′ . ByDefinition 4.3(1) and Definition 5.8(1) we have N ≡ q [[ Σ i ∈ I p ? λ i ; Q i ]] k N where λ k = λ forsome k ∈ I . We may then conclude that N k M β −→ q [[ Q k ]] k N k M ′ = N ′ k M ′ . Case n > . Let i / o ( ρ ) = β and N k M β −→ N ′′ k M ′′ be the corresponding transitionas obtained from the base case. We show that β (cid:7) ρ j is defined for all j , 2 ≤ j ≤ n . If β (cid:7) ρ k were undefined for some k , 2 ≤ k ≤ n , then by Definition 8.1(2) either ρ k = ρ or ρ k = p :: ( ω (cid:27) , π · ζ ) where { p } = play ( β ) and π , β @ p , which implies ρ k ρ . So both casesare impossible. Thus, by Lemma 8.7(2) we may define ρ ′ j = β (cid:7) ρ j ∈ NE ( N ′′ k M ′′ ) for all j ,2 ≤ j ≤ n . We show that ρ ′ ; · · · ; ρ ′ n is a proving sequence in S N ( N ′′ k M ′′ ). By Lemma 8.5(2) ρ j = β ♦ ρ ′ j for all j , 2 ≤ j ≤ n . Then by Lemma 8.6(3) no two n-events in the sequence ρ ′ ; · · · ; ρ ′ n can be in conflict.Let ρ ∈ NE ( N ′′ k M ′′ ) and ρ ≺ ρ ′ h for some h , 2 ≤ h ≤ n . 
By Lemma 8.7(1) β ♦ ρ ∈ NE ( N k M )and then by Lemma 8.6(1) β ♦ ρ ≺ β ♦ ρ ′ h = ρ h . Let ρ ′ = β ♦ ρ . Therefore ρ ′ ≺ ρ h . Since ρ ; · · · ; ρ n is a proving sequence in S N ( N k M ), by Definition 3.6 there is l < h such thateither ρ ′ = ρ l or ρ ′ ρ l ≺ ρ h . In the first case, by Lemma 8.5(1) we get ρ = β (cid:7) ρ ′ = β (cid:7) ρ l = ρ ′ l .In the second case, by Lemma 8.6(2) and (4) we get ρ ρ ′ l ≺ ρ ′ h .We have shown that ρ ′ ; · · · ; ρ ′ n is a proving sequence in S N ( N ′′ k M ′′ ). By induction N ′′ k M ′′ τ ′ −→ N ′ k M ′ where τ ′ = i / o ( ρ ′ ) · · · i / o ( ρ ′ n ). Let τ = i / o ( ρ ) · . . . · i / o ( ρ n ). Since i / o ( ρ ′ j ) = i / o ( ρ j ) for all j , ≤ j ≤ n , we have τ = β · τ ′ . Hence N k M β −→ N ′′ k M ′′ τ ′ −→ N ′ k M ′ is the required transition sequence. .2. Transition Sequences of Global Types and Proving Sequences of their ESs.
We introduce two operators ◦ and • for global events, which play the same role as the operators ♦ and (cid:7) for network events. In defining these operators we must make sure that, in the resulting g-event [ω′, τ′]∼, the trace τ′ is ω′-pointed, see Definition 7.12(1) and (2). Let us start with the formal definition, and then we shall explain it in detail.

Definition 8.13 (Retrieval and residual of a g-event with respect to a communication).
(1) (Retrieval of a g-event before a communication) The operator ◦ applied to a communication β and a g-event δ is defined by cases on β:
pq!λ ◦ [ω · pq!λ, τ]∼ = [ω, pq!λ · τ]∼ if pq!λ · τ is ω-pointed, and [ω, τ]∼ otherwise;
pq?λ ◦ [ω, τ]∼ = [pq!λ · ω, pq?λ · τ]∼ if q ∈ play(τ), and [pq!λ · ω, τ]∼ if q ∉ play(τ).
(2) (Residual of a g-event after a communication) The operator • applied to a communication β and a g-event δ is defined by cases on β:
pq!λ • [ω, τ]∼ = [ω · pq!λ, τ′]∼ if τ ≈ω pq!λ · τ′ with τ′ ≠ ǫ, and [ω · pq!λ, τ]∼ if p ∉ play(τ);
pq?λ • [pq!λ · ω, τ]∼ = [ω, τ′]∼ if τ ≈ω′ pq?λ · τ′ with τ′ ≠ ǫ, where ω′ = pq!λ · ω, and [ω, τ]∼ if q ∉ play(τ).

Note that the operators ◦ and • preserve the communication of g-events, namely i/o(β ◦ δ) = i/o(β • δ) = i/o(δ). It is also immediate to see that they transform the o-trace as the operators ♦ and (cid:7), see Definition 8.2. So we will explain only the transformation of the trace τ. Consider first the case of pq?λ ◦ [ω, τ]∼, which is the simplest one. To obtain the retrieval of [ω, τ]∼ before pq?λ, we start by prefixing the trace τ by the input pq?λ. The resulting trace will be ω-pointed if and only if the added input is a local cause of some subsequent communication, and this holds precisely when q ∈ play(τ). If this is not the case, then we remove the new input pq?λ from the trace. Consider now the case of pq!λ ◦ [ω · pq!
λ, τ ] ∼ .Here we extend the trace as expected, but checking the pointedness condition is moreinvolved, since we need to check that the added output is either a local cause or a cross-cause of some communication in τ . Note that we do not need to check that all the otheroutputs in τ still satisfy the 2nd condition of pointedness, since, letting ω ′ = ω · pq ! λ and τ ′ = pq ! λ · τ , for any output rs ! λ occurring in τ we have that the number of outputs from r to s preceding it in ω · τ ′ is equal to the number of outputs from r to s preceding it in ω ′ · τ .Therefore we could have replaced the condition “if pq ! λ · τ is ω -pointed ′′ by the moredetailed (but equivalent) condition “if p ∈ play ( τ ) or 1 + | ω | ∝ ω · pq ! λ · τ j + | ω | for some j > pq ! λ, pq ? λ ] ∼ , where ω = pq ! λ and τ = pq ? λ , we have p < play ( τ ), but 1 ∝ pq ! λ · pq ? λ
2, thus pq ! λ ◦ [ pq ! λ, pq ? λ ] ∼ = [ ǫ, pq ! λ · pq ? λ ] ∼ . On the other hand, for the g-event [ pq ! λ, rs ! λ ′ · rs ? λ ′ ] ∼ , where ω = pq ! λ and τ = rs ! λ ′ · rs ? λ ′ , we have p < play ( τ ) and ¬ (1 ∝ pq ! λ · rs ! λ ′ · rs ? λ ′
2) and ¬ (1 ∝ pq ! λ · rs ! λ ′ · rs ? λ ′ pq ! λ ◦ [ pq ! λ, rs ! λ ′ · rs ? λ ′ ] ∼ = [ ǫ, rs ! λ ′ · rs ? λ ′ ] ∼ . Note that pq ! λ ◦ [ ω, τ ] ∼ is undefined whenthe queue ω does not end with pq ! λ , while pq ? λ ◦ [ ω, τ ] ∼ is always defined. ext, consider the definition of pq ! λ • [ ω, τ ] ∼ . If the first communication with player p in τ is the output pq ! λ , then this output can be brought to the head of the trace usingthe equivalence ≈ ω . In this case, we obtain the residual of [ ω, τ ] ∼ after pq ! λ by removingthe message pq ! λ from the head of the trace, provided this does not result in the emptytrace (otherwise, the residual is undefined). Then, letting ω ′ = ω · pq ! λ , it is easy to seethat the trace τ ′ is ω ′ -pointed, since it is a su ffi x of τ = pq ! λ · τ ′ which is ω -pointed (seeLemma 7.9). On the other hand, if p < play ( τ ), then the residual of [ ω, τ ] ∼ after pq ! λ issimply obtained by leaving the trace unchanged. In this case, letting again ω ′ = ω · pq ! λ ,the ω ′ -pointedness of τ follows immediately from its ω -pointedness, since τ contains nooutput from p to q and therefore the addition of the output pq ! λ at the end of the queuedoes not a ff ect the pointedness of τ . For instance, consider the g-event [ pr ! λ ′ , pr ? λ ′ ] ∼ where ω = pr ! λ ′ and τ = pr ? λ ′ . Observe that p occurs in τ , but p < play ( τ ). Then we have pq ! λ • [ pr ! λ ′ , pr ? λ ′ ] ∼ = [ pr ! λ ′ · pq ! λ, pr ? λ ′ ] ∼ .The case of pq ? λ • [ pq ! λ · ω, τ ] ∼ is entirely similar.It is easy to verify that the definitions of ◦ and • given in Definition 8.13 may berewritten in the more concise form given in the following lemma, which is analogous toLemma 8.4. Lemma 8.14 . 
(1) If β ◦ [ω, τ]∼ is defined, then (β ⊲ ω)·β·τ is well formed and β ◦ [ω, τ]∼ = [β ⊲ ω, β ⌈_{β⊲ω} τ]∼.
(2) If β • [ω, β ⌈_ω τ]∼ is defined, then ω·β·τ is well formed and β • [ω, β ⌈_ω τ]∼ = [β ◮ ω, τ]∼.

We shall now relate the operators ◦ and • with the function ev, which builds g-events, see Definition 7.14. To this end, we first prove the following lemma, which shows how the filtering of a trace is affected when the trace is prefixed by a communication.

Lemma 8.15.
(1) Let β ⊲ ω be defined and ω′ = β ⊲ ω. Let τ, τ′ be such that τ′ is (ω′·β·τ)-pointed. Then (β·τ) ⌈_{ω′} τ′ = β ⌈_{ω′} (τ ⌈_ω τ′).
(2) Let β ◮ ω be defined and ω′ = β ◮ ω. Let τ, τ′ be such that τ′ is (ω·β·τ)-pointed. Then (β·τ) ⌈_ω τ′ = β ⌈_ω (τ ⌈_{ω′} τ′).

Proof. (1) We show (β·τ) ⌈_{ω′} τ′ = β ⌈_{ω′} (τ ⌈_ω τ′) by induction on τ.
Case τ = ε. In this case both the LHS and the RHS reduce to β ⌈_{ω′} τ′, for whatever ω.
Case τ = τ″·β′. By Definition 7.13 we obtain for the LHS:
(β·τ″·β′) ⌈_{ω′} τ′ =
  (β·τ″) ⌈_{ω′} (β′·τ′)   if β′·τ′ is (ω′·β·τ″)-pointed
  (β·τ″) ⌈_{ω′} τ′        otherwise
By Definition 7.13 (applied to the internal filtering) we obtain for the RHS:
β ⌈_{ω′} ((τ″·β′) ⌈_ω τ′) =
  β ⌈_{ω′} (τ″ ⌈_ω (β′·τ′))   if β′·τ′ is (ω·τ″)-pointed
  β ⌈_{ω′} (τ″ ⌈_ω τ′)        otherwise
We distinguish two cases, according to whether β is an input or an output.
Suppose first that β is an output. Then ω = ω′·β. The side condition, i.e. the requirement that β′·τ′ be (ω·τ″)-pointed, is the same in both cases. We may then immediately conclude that LHS = RHS using the induction hypothesis.
Suppose now that β is an input, say β = pq?λ. Then ω′ = pq!λ·ω. Observe that, since ω·τ″ is obtained from ω′·β·τ″ = pq!λ·ω·pq?λ·τ″ by erasing a pair of matching communications, β′·τ′ is (ω·τ″)-pointed if and only if β′·τ′ is (ω′·β·τ″)-pointed. Then we may again conclude by induction.
(2) follows from (1) since β ⊲ (β ◮ ω) = ω.

We may now prove the following:

Lemma 8.16.
(1) If β ⊲ ω is defined, then β ◦ ev(ω, τ) = ev(β ⊲ ω, β·τ).
(2) If τ ≠ ε and β ◮ ω is defined, then β • ev(ω, β·τ) = ev(β ◮ ω, τ).

Proof. Definition 7.14 and Lemmas 8.14 and 8.15 with τ′ = ε imply (1) and (2), since:
(1) β ◦ ev(ω, τ) = β ◦ [ω, τ ⌈_ω ε]∼ (by Definition 7.14)
= [ω′, β ⌈_{ω′} (τ ⌈_ω ε)]∼ (by Lemma 8.14(1))
= [ω′, (β·τ) ⌈_{ω′} ε]∼ (by Lemma 8.15(1))
and ev(ω′, β·τ) = [ω′, (β·τ) ⌈_{ω′} ε]∼ (by Definition 7.14), where ω′ = β ⊲ ω.
(2) β • ev(ω, β·τ) = β • [ω, (β·τ) ⌈_ω ε]∼ (by Definition 7.14)
= β • [ω, β ⌈_ω (τ ⌈_{ω′} ε)]∼ (by Lemma 8.15(2))
= [ω′, τ ⌈_{ω′} ε]∼ (by Lemma 8.14(2))
and ev(ω′, τ) = [ω′, τ ⌈_{ω′} ε]∼ (by Definition 7.14), where ω′ = β ◮ ω.

Lemma 8.17 is the analogue of Lemma 8.5 as regards its first two statements. The remaining two statements establish some commutativity properties of the operators ◦ and • when applied to two communications with different players. These properties rely on the corresponding commutativity properties of the mappings ⊲ and ◮ on o-traces, given in Lemma 8.3. Note that these properties are needed for ◦ and • whereas they were not needed for ♦ and ◆, because Rules [IComm-Out] and [IComm-In] of Figure 5 allow transitions to occur inside sgts, whereas the LTS for networks only allows transitions for top-level communications. In fact, Lemma 8.17(3) and (4) are used in the proof of Lemma 8.20.

Lemma 8.17.
(1) If β ◦ δ is defined, then β • (β ◦ δ) = δ.
(2) If β • δ is defined, then β ◦ (β • δ) = δ.
(3) If both β₁ ◦ δ and β₂ ◦ δ are defined, and play(β₁) ∩ play(β₂) = ∅, then β₁ ◦ (β₂ ◦ δ) is defined and β₁ ◦ (β₂ ◦ δ) = β₂ ◦ (β₁ ◦ δ).
(4) If both β₂ • δ and β₂ • (β₁ ◦ δ) are defined, and play(β₁) ∩ play(β₂) = ∅, then β₁ ◦ (β₂ • δ) = β₂ • (β₁ ◦ δ).

Proof. Statements (1) and (2) immediately follow from Lemma 8.14. In the proofs of the remaining statements we convene that "β is required in τ₁·β·τ₂" is short for "the shown occurrence of β is required in τ₁·β·τ₂", and similarly for "β matches an output in τ₁·β·τ₂".
(3) Let δ = [ω, τ]∼.
Since βᵢ ◦ δ is defined for i ∈ {1, 2}, by Lemma 8.14(1) ωᵢ = βᵢ ⊲ ω is defined for i ∈ {1, 2}. Then by Lemma 8.3(1) β₁ ⊲ (β₂ ⊲ ω) ≡ β₂ ⊲ (β₁ ⊲ ω). Let ω′ = β₁ ⊲ (β₂ ⊲ ω). Using Lemma 8.14(1) we get, for i ∈ {1, 2},
δᵢ = βᵢ ◦ [ω, τ]∼ = [ωᵢ, βᵢ ⌈_{ωᵢ} τ]∼
Using again Lemma 8.14(1) we get
β₁ ◦ δ₂ = β₁ ◦ [ω₂, β₂ ⌈_{ω₂} τ]∼ = [ω′, β₁ ⌈_{ω′} (β₂ ⌈_{ω₂} τ)]∼
Similarly
β₂ ◦ δ₁ = β₂ ◦ [ω₁, β₁ ⌈_{ω₁} τ]∼ = [ω′, β₂ ⌈_{ω′} (β₁ ⌈_{ω₁} τ)]∼
We want to prove that
(*) β₁ ⌈_{ω′} (β₂ ⌈_{ω₂} τ) ≈_{ω′} β₂ ⌈_{ω′} (β₁ ⌈_{ω₁} τ)
In the proof of (*) we will use the following facts, where h, k = 1, 2 and h ≠ k:
(a) βₕ·βₖ·τ ≈_{ω′} βₖ·βₕ·τ;
(b) if βₕ·τ is ω′-pointed and βₖ·τ is not ωₖ-pointed, then βₕ·τ is ωₕ-pointed;
(c) if βₕ·τ is ωₕ-pointed and βₖ·τ is not ωₖ-pointed, then βₕ·τ is ω′-pointed;
(d) βₕ·βₖ·τ is ω′-pointed iff βₕ·τ is ωₕ-pointed and βₖ·τ is ωₖ-pointed.
Fact (a). We show that βₕ·βₖ·τ ω′-swaps to βₖ·βₕ·τ. By hypothesis play(βₕ) ∩ play(βₖ) = ∅, so it is enough to show that βₖ does not match βₕ in the trace ω′·βₕ·βₖ·τ = (βₕ ⊲ (βₖ ⊲ ω))·βₕ·βₖ·τ. Suppose that βₕ is an output and βₖ is an input whose matching output is βₕ. Since δₕ = βₕ ◦ δ is defined and βₕ is an output, it must be ω ≡ ωₕ·βₕ. Then, since δₖ = βₖ ◦ δ is defined, βₖ is an input and its matching output is βₕ, we get βₖ ⊲ ω = βₕ·ω ≡ βₕ·ωₕ·βₕ. Then ω′ = βₕ ⊲ (βₖ ⊲ ω) ≡ βₕ·ωₕ. Clearly, βₖ matches the initial output βₕ in the trace ω′·βₕ·βₖ·τ, since βₖ is the first input in the trace and the initial βₕ is the first complementary output in the trace. Therefore βₖ does not match its adjacent output βₕ.
Fact (b). If βₕ is required in βₕ·τ (a condition which always holds when βₕ is an input and βₕ·τ is ω′-pointed), then βₕ·τ is ω-pointed for every ω. We may then assume that βₕ is an output that is not required in βₕ·τ.
If βₖ is an output, then ωₕ ≡ ω′·βₖ. If an input matches βₕ in ω′·βₕ·τ, then the same input matches βₕ in ωₕ·βₕ·τ, since play(βₕ) ∩ play(βₖ) = ∅.
If βₖ is an input, say βₖ = rs?λ′, then ω′ ≡ rs!λ′·ωₕ. Suppose βₕ = pq!λ. Observe that it must be q ≠ s, because otherwise no input pq?λ could occur in τ (since βₖ·τ is not ωₖ-pointed), contradicting the hypotheses that βₕ·τ is ω′-pointed and βₕ is not required in βₕ·τ. Then the presence of rs!λ′ cannot affect the multiplicity of pq! or pq? in any trace. Therefore, if an input matches βₕ in ω′·βₕ·τ, then the same input matches βₕ in ωₕ·βₕ·τ.
Fact (c). Again, we may assume that βₕ is an output that is not required in βₕ·τ.
If βₖ is an output, then ωₕ ≡ ω′·βₖ. If an input matches βₕ in ωₕ·βₕ·τ, then the same input matches βₕ in ω′·βₕ·τ, since play(βₕ) ∩ play(βₖ) = ∅.
If βₖ is an input, say βₖ = rs?λ′, then ω′ ≡ rs!λ′·ωₕ. Let βₕ = pq!λ. Again, it must be q ≠ s, because otherwise no input pq?λ could occur in τ (since βₖ·τ is not ωₖ-pointed), contradicting the hypotheses that βₕ·τ is ωₕ-pointed and βₕ is not required in βₕ·τ. Therefore, if an input matches βₕ in ωₕ·βₕ·τ, then the same input matches βₕ in ω′·βₕ·τ.
Fact (d). From play(βₕ) ∩ play(βₖ) = ∅ it follows that βₕ is required in βₕ·βₖ·τ iff βₕ is required in βₕ·τ, and similarly for βₖ. Let us then assume that βₕ and βₖ are not both required in βₕ·βₖ·τ, i.e., that at least one of them is an output not required in βₕ·βₖ·τ.
If both βₕ and βₖ are outputs, then ωₕ ≡ ω′·βₖ. Then an input matches βₕ in ω′·βₕ·βₖ·τ iff the same input matches βₕ in ωₕ·βₕ·τ, since βₕ·βₖ·τ ≈_{ω′} βₖ·βₕ·τ by Fact (a).
Let βₕ = pq!λ and βₖ = rs?λ′, where βₕ is not required in βₕ·βₖ·τ. Then ω′ ≡ rs!λ′·ωₕ. Therefore an input matches βₕ in ω′·βₕ·βₖ·τ iff the same input matches βₕ in ωₕ·βₕ·τ, since βₕ·βₖ·τ ≈_{ω′} βₖ·βₕ·τ by Fact (a).
We proceed now to prove (*). We distinguish three cases, according to whether:
i) each βᵢ·τ is ωᵢ-pointed, for i = 1, 2;
ii) no βᵢ·τ is ωᵢ-pointed, for i = 1, 2;
iii) βₕ·τ is ωₕ-pointed and βₖ·τ is not ωₖ-pointed, for h, k = 1, 2 and h ≠ k.
Case i). Suppose each βᵢ·τ is ωᵢ-pointed, for i = 1, 2. Then β₁ ⌈_{ω′} (β₂ ⌈_{ω₂} τ) ≈_{ω′} β₁ ⌈_{ω′} (β₂·τ) and β₂ ⌈_{ω′} (β₁ ⌈_{ω₁} τ) ≈_{ω′} β₂ ⌈_{ω′} (β₁·τ). By Fact (d) both β₁·β₂·τ and β₂·β₁·τ are ω′-pointed. Then β₁ ⌈_{ω′} (β₂·τ) ≈_{ω′} β₁·β₂·τ and β₂ ⌈_{ω′} (β₁·τ) ≈_{ω′} β₂·β₁·τ. By Fact (a) β₁·β₂·τ ≈_{ω′} β₂·β₁·τ.
Case ii). Suppose no βᵢ·τ is ωᵢ-pointed, for i = 1, 2. Then β₁ ⌈_{ω′} (β₂ ⌈_{ω₂} τ) ≈_{ω′} β₁ ⌈_{ω′} τ and β₂ ⌈_{ω′} (β₁ ⌈_{ω₁} τ) ≈_{ω′} β₂ ⌈_{ω′} τ. By Fact (b), no βᵢ·τ can be ω′-pointed, for i ∈ {1, 2}. Hence β₁ ⌈_{ω′} τ ≈_{ω′} τ ≈_{ω′} β₂ ⌈_{ω′} τ.
Case iii). Suppose βₕ·τ is ωₕ-pointed and βₖ·τ is not ωₖ-pointed, for h, k = 1, 2 and h ≠ k. Then βₕ ⌈_{ω′} (βₖ ⌈_{ωₖ} τ) ≈_{ω′} βₕ ⌈_{ω′} τ and βₖ ⌈_{ω′} (βₕ ⌈_{ωₕ} τ) ≈_{ω′} βₖ ⌈_{ω′} (βₕ·τ). By Fact (c) βₕ·τ is ω′-pointed. Hence βₕ ⌈_{ω′} τ ≈_{ω′} βₕ·τ. By Fact (d) βₖ·βₕ·τ is not ω′-pointed. Therefore βₖ ⌈_{ω′} (βₕ·τ) ≈_{ω′} βₕ·τ.
(4) Let δ = [ω, τ]∼ (below, for an input β we write β̂ for its matching output). Since both β₂ • δ and β₂ • (β₁ ◦ δ) are defined, by Lemma 8.14 both β₂ ◮ ω and β₂ ◮ (β₁ ⊲ ω) must be defined. Then, by Lemma 8.3(2), β₂ ◮ (β₁ ⊲ ω) ≡ β₁ ⊲ (β₂ ◮ ω). So we set ω′ = β₁ ⊲ (β₂ ◮ ω). Let ω₁ = β₁ ⊲ ω. By Definition 8.13(1) we get
δ₁ = β₁ ◦ δ =
  [ω₁, β₁·τ]∼   if β₁·τ is ω₁-pointed
  [ω₁, τ]∼      otherwise
Let ω₂ = β₂ ◮ ω. By Definition 8.13(2) we get
δ₂ = β₂ • δ =
  [ω₂, τ′]∼   if τ ≈_ω β₂·τ′
  [ω₂, τ]∼    if play(β₂) ∩ play(τ) = ∅
The remainder of this proof is split into two cases, according to the shape of δ₂.
Case δ₂ = [ω₂, τ]∼. Then play(β₂) ∩ play(τ) = ∅. By Definition 8.13(1) we get
β₁ ◦ δ₂ =
  [ω′, β₁·τ]∼   if β₁·τ is ω′-pointed
  [ω′, τ]∼      otherwise
Since play(β₂) ∩ play(β₁·τ) = ∅, by Definition 8.13(2) we get
β₂ • δ₁ =
  [ω′, β₁·τ]∼   if β₁·τ is ω₁-pointed
  [ω′, τ]∼      otherwise
We have to show that
(**) β₁·τ is ω′-pointed iff β₁·τ is ω₁-pointed.
If β₁ is an input, it must be required in τ for both ω′-pointedness and ω₁-pointedness, so this case is obvious. Let β₁ = pq!λ.
If β₂ is an output, then ω′ ≡ ω₁·β₂ by Definition 8.2(2). Since β₂ ≠ pq!λ′ for all λ′, an input in τ matches β₁ in ω′·β₁·τ iff it matches β₁ in ω₁·β₁·τ.
If β₂ is an input, then ω₁ ≡ β̂₂·ω′ by Definition 8.2(2). If β₂ ≠ pq?λ′ for all λ′, then an input in τ matches β₁ in ω′·β₁·τ iff it matches β₁ in β̂₂·ω′·β₁·τ. Let β₂ = pq?λ′ for some λ′. Since play(β₂) ∩ play(τ) = ∅, no input in τ can match β₁, either in ω′·β₁·τ or in β̂₂·ω′·β₁·τ. This concludes the proof of (**).
Case δ₂ = [ω₂, τ′]∼. Then τ ≈_ω β₂·τ′. By Definition 8.13(1) we get
β₁ ◦ δ₂ =
  [ω′, β₁·τ′]∼   if β₁·τ′ is ω′-pointed
  [ω′, τ′]∼      otherwise
and, since δ = [ω, β₂·τ′]∼, by the same definition we get
β₁ ◦ δ =
  [ω₁, β₁·β₂·τ′]∼   if β₁·β₂·τ′ is ω₁-pointed
  [ω₁, β₂·τ′]∼      otherwise
We first show that β₁·β₂·τ′ ≈_{ω₁} β₂·β₁·τ′. Since β₁ ◦ δ is defined, the trace ω₁·β₁·β₂·τ′ is well formed by Lemma 8.14(1), and β₁, which precedes β₂, cannot be a matching input for β₂. To show that β₂ cannot be a matching input for β₁ observe that, if it were, then β̂₂ = β₁. Since β₂ • (β₁ ◦ δ) is defined we have ω₁ ≡ β̂₂·ω′ by Definition 8.2(2). Therefore β₂ cannot be a matching input for β₁ in β̂₂·ω′·β₁·β₂·τ′, since it is the matching input of the first occurrence of β̂₂. From this and play(β₁) ∩ play(β₂) = ∅ we get β₁·β₂·τ′ ≈_{ω₁} β₂·β₁·τ′. Therefore
β₁ ◦ δ =
  [ω₁, β₂·β₁·τ′]∼   if β₂·β₁·τ′ is ω₁-pointed
  [ω₁, β₂·τ′]∼      otherwise
and by Definition 8.13(2)
β₂ • δ₁ =
  [ω′, β₁·τ′]∼   if β₂·β₁·τ′ is ω₁-pointed
  [ω′, τ′]∼      otherwise
We have to show that
(***) β₁·τ′ is ω′-pointed iff β₂·β₁·τ′ is ω₁-pointed.
Note that β₁ is required in τ′ iff it is required in β₂·τ′, since play(β₁) ∩ play(β₂) = ∅. Therefore the result is immediate when β₁ is an input. Let β₁ be an output.
If β₂ is an output, then ω′ ≡ ω₁·β₂ by Definition 8.2(2). Suppose that β₁·τ′ is ω′-pointed, where τ′ = τ′₁·β′·τ′₂ and the input β′ matches β₁ in ω′·β₁·τ′₁·β′·τ′₂ ≡ ω₁·β₂·β₁·τ′₁·β′·τ′₂. Then, since β₁·β₂·τ′ ≈_{ω₁} β₂·β₁·τ′, we have that β′ matches β₁ in ω₁·β₁·β₂·τ′₁·β′·τ′₂. In a similar way we can prove that, if an input β′ matches β₁ in ω₁·β₁·β₂·τ′₁·β′·τ′₂, then β′ matches β₁ in ω₁·β₂·β₁·τ′₁·β′·τ′₂.
If β₂ is an input, then ω₁ ≡ β̂₂·ω′ by Definition 8.2(2). Suppose that β₁·τ′ is ω′-pointed, where τ′ = τ′₁·β′·τ′₂ and β′ matches β₁ in ω′·β₁·τ′₁·β′·τ′₂. Then β′ matches β₁ in β̂₂·ω′·β₂·β₁·τ′₁·β′·τ′₂, since β₂ is the first input in the trace and it matches β̂₂. In a similar way we can prove that, if an input β′ matches β₁ in β̂₂·ω′·β₂·β₁·τ′₁·β′·τ′₂, then β′ matches β₁ in ω′·β₁·τ′₁·β′·τ′₂. Therefore (***) holds.

The next lemma shows that the retrieval and residual operators on g-events preserve causality and that the retrieval operator preserves conflict. It is the analogue of Lemma 8.6, but without the statement corresponding to Lemma 8.6(4), which is true but not required for later results. The difference is due to the fact that the ESs of networks are FESs, while those of simple agts are PESs. This appears clearly when looking at the proof of Theorem 8.12, which uses Lemma 8.6(4), while that of Theorem 8.27 does not need the corresponding property.

Lemma 8.18.
(1) If δ₁ < δ₂ and β ◦ δ₂ is defined, then β ◦ δ₁ < β ◦ δ₂.
(2) If δ₁ < δ₂ and both β • δ₁ and β • δ₂ are defined, then β • δ₁ < β • δ₂.
(3) If δ₁ # δ₂ and both β ◦ δ₁ and β ◦ δ₂ are defined, then β ◦ δ₁ # β ◦ δ₂.

Proof. (1) Since δ₁ < δ₂ and β ◦ δ₂ is defined, also β ◦ δ₁ is defined. Let δ₁ = [ω, τ]∼ and δ₂ = [ω, τ·τ′]∼. If β ◦ δ₁ = [ω′, τ]∼ and β ◦ δ₂ = [ω′, τ·τ′]∼ for some ω′, then β ◦ δ₁ < β ◦ δ₂.
Let β be an output. Then ω ≡ ω′·β. If β ◦ δ₁ = [ω′, β·τ]∼, then it must be β ◦ δ₂ = [ω′, β·τ·τ′]∼, thus β ◦ δ₁ < β ◦ δ₂. The only other case is β ◦ δ₁ = [ω′, τ]∼ and β ◦ δ₂ = [ω′, β·τ·τ′]∼. Since β ◦ δ₁ = [ω′, τ]∼, the trace β·τ is not ω′-pointed, so play(β) ⊈ play(τ) and τ does not contain the matching input of β. Therefore β·τ·τ′ ≈_{ω′} τ·β·τ′ and β ◦ δ₂ = [ω′, β·τ·τ′]∼ = [ω′, τ·β·τ′]∼, so β ◦ δ₁ < β ◦ δ₂.
Let β = pq?λ be an input. If β ◦ δ₁ = [pq!λ·ω, β·τ]∼, then it must be β ◦ δ₂ = [pq!λ·ω, β·τ·τ′]∼. We get β ◦ δ₁ < β ◦ δ₂.
The only other case is β ◦ δ₁ = [pq!λ·ω, τ]∼ and β ◦ δ₂ = [pq!λ·ω, β·τ·τ′]∼. If β ◦ δ₁ = [pq!λ·ω, τ]∼, then play(β) ⊈ play(τ). Therefore β·τ·τ′ ≈_{pq!λ·ω} τ·β·τ′ and β ◦ δ₂ = [pq!λ·ω, β·τ·τ′]∼ = [pq!λ·ω, τ·β·τ′]∼, so β ◦ δ₁ < β ◦ δ₂.
(2) Let δ₁ = [ω, τ]∼ and δ₂ = [ω, τ·τ′]∼. If β • δ₁ = [ω′, τ]∼ and β • δ₂ = [ω′, τ·τ′]∼ for some ω′, then β • δ₁ < β • δ₂.
Let β be an output. If τ ≈_ω β·τ₁ with τ₁ ≠ ε, then β • δ₁ = [ω·β, τ₁]∼ and β • δ₂ = [ω·β, τ₁·τ′]∼. Therefore β • δ₁ < β • δ₂. Let now play(β) ⊈ play(τ) and τ·τ′ ≈_ω β·τ₁ with τ₁ ≠ ε. This implies β·τ₁ ≈_ω β·τ·τ′₁ for some τ′₁. It follows that τ₁ ≈_{ω·β} τ·τ′₁. Then we get β • δ₁ = [ω·β, τ]∼ and β • δ₂ = [ω·β, τ₁]∼ = [ω·β, τ·τ′₁]∼, which imply β • δ₁ < β • δ₂.
Let β be an input. The proof is similar.
(3) Let δ₁ = [ω, τ]∼ and δ₂ = [ω, τ′]∼ with τ @ p # τ′ @ p. We select some interesting cases. Note first that τ @ p # τ′ @ p implies p ∈ play(τ) ∩ play(τ′).
If β is an output, then ω ≡ ω″·β. If β·τ and β·τ′ are both ω″-pointed, or both not ω″-pointed, then the result is immediate. If β·τ is ω″-pointed while β·τ′ is not ω″-pointed, then play(β) ⊈ play(τ′). This implies p ∉ play(β). Similarly, if β is an input and play(β) ⊆ play(τ) while play(β) ⊈ play(τ′), then p ∉ play(β). In both cases we get (β·τ) @ p = τ @ p and (β·τ′) @ p = τ′ @ p, so we conclude β ◦ δ₁ # β ◦ δ₂.

The next lemma shows that the operator ◦ builds g-events of a simple agt G ‖ M from g-events of the immediate subtypes of G composed in parallel with the queues given by the input/output matching of Figure 3. Symmetrically, • builds g-events of agts whose sgts are subtypes of G starting from g-events of G ‖ M.

Lemma 8.19.
(1) If δ ∈ GE(G ‖ M·⟨p, λ, q⟩), then pq!λ ◦ δ ∈ GE(pq! ⊞_{i∈I} λᵢ; Gᵢ ‖ M), where λ = λₖ and G = Gₖ for some k ∈ I.
(2) If δ ∈ GE(G ‖ M), then pq?λ ◦ δ ∈ GE(pq?λ; G ‖ ⟨p, λ, q⟩·M).
(3) If δ ∈ GE(pq! ⊞_{i∈I} λᵢ; Gᵢ ‖ M) and pq!λₖ • δ is defined, then pq!λₖ • δ ∈ GE(Gₖ ‖ M·⟨p, λₖ, q⟩), where k ∈ I.
(4) If δ ∈ GE(pq?λ; G ‖ ⟨p, λ, q⟩·M) and pq?λ • δ is defined, then pq?λ • δ ∈ GE(G ‖ M).

Proof. (1) By Definition 7.18(1), if δ ∈ GE(G ‖ M·⟨p, λ, q⟩), then δ = ev(ω·pq!λ, τ) where ω = otr(M) and τ ∈ Tr⁺(G). By Lemma 8.16(1) pq!λ ◦ δ = ev(ω, pq!λ·τ). Then, again by Definition 7.18(1), pq!λ ◦ δ ∈ GE(pq! ⊞_{i∈I} λᵢ; Gᵢ ‖ M), where λ = λₖ and G = Gₖ for some k ∈ I, since pq!λₖ·τ ∈ Tr⁺(pq! ⊞_{i∈I} λᵢ; Gᵢ).
(2) Similar to the proof of (1).
(3) By Definition 7.18(1), if δ ∈ GE(pq! ⊞_{i∈I} λᵢ; Gᵢ ‖ M), then δ = ev(ω, τ) where ω = otr(M) and τ ∈ Tr⁺(pq! ⊞_{i∈I} λᵢ; Gᵢ), which gives τ ≈_ω pq!λₕ·τₕ with τₕ ∈ Tr⁺(Gₕ) for some h ∈ I. By hypothesis pq!λₖ • δ is defined, which implies τ ≈_ω pq!λₖ·τₖ and τₖ ≠ ε. Then Lemma 8.16(2) gives pq!λₖ • δ = ev(ω·pq!λₖ, τₖ). We conclude that pq!λₖ • δ ∈ GE(Gₖ ‖ M·⟨p, λₖ, q⟩).
(4) Similar to the proof of (3).

Like ♦ and ◆, the operators ◦ and • modify g-events in the same way as the transitions in the LTS would do. This is formalised and proved in the following lemma, which is similar to Lemma 8.7.

Lemma 8.20.
Let G ‖ M β −→ G′ ‖ M′. Then otr(M) ≡ β ⊲ otr(M′) and:
(1) if δ ∈ GE(G′ ‖ M′), then β ◦ δ ∈ GE(G ‖ M);
(2) if δ ∈ GE(G ‖ M) and β • δ is defined, then β • δ ∈ GE(G′ ‖ M′).

Proof. We first show otr(M) ≡ β ⊲ otr(M′). Let G ‖ M β −→ G′ ‖ M′. If β = pq!λ, then by Lemma 6.12(1) M′ ≡ M·⟨p, λ, q⟩. Thus otr(M′) ≡ otr(M)·β. If β = pq?λ, then by Lemma 6.12(2) M ≡ ⟨p, λ, q⟩·M′. Thus otr(M) ≡ pq!λ·otr(M′). Therefore otr(M) ≡ β ⊲ otr(M′).
(1) By induction on the inference of the transition G ‖ M β −→ G′ ‖ M′, see Figure 5.
Base Cases. If the applied rule is [Ext-Out], then G = pq! ⊞_{i∈I} λᵢ; Gᵢ and β = pq!λₖ and G′ = Gₖ and M′ ≡ M·⟨p, λₖ, q⟩ for some k ∈ I. By Lemma 8.19(1) β ◦ δ ∈ GE(G ‖ M).
If the applied rule is [Ext-In], then G = pq?λ; G′ and β = pq?λ and M ≡ ⟨p, λ, q⟩·M′. By Lemma 8.19(2) β ◦ δ ∈ GE(G ‖ M).
Inductive Cases. If the last applied rule is [IComm-Out], then G = pq! ⊞_{i∈I} λᵢ; Gᵢ and G′ = pq! ⊞_{i∈I} λᵢ; G′ᵢ and Gᵢ ‖ M·⟨p, λᵢ, q⟩ β −→ G′ᵢ ‖ M′·⟨p, λᵢ, q⟩ for all i ∈ I, with p ∉ play(β). By Definition 7.18(1) δ ∈ GE(G′ ‖ M′) implies δ = ev(ω, τ) where ω = otr(M′) and τ ∈ Tr⁺(G′). Then τ = pq!λₖ·τ′ for some k ∈ I, and δ = [ω, τ₁]∼ with τ₁ = (pq!λₖ·τ′) ⌈_ω ε by Definition 7.14. By Definition 7.13 we get either τ₁ ≈_ω pq!λₖ·τ₂ or p ∉ play(τ₁). Then by Definition 8.13(2) pq!λₖ • δ is defined unless τ₁ ≈_ω pq!λₖ·τ₂ and τ₂ = ε. We consider the two cases.
Case τ₁ ≈_ω pq!λₖ·τ₂ and τ₂ = ε. We get β ◦ δ = [β ⊲ ω, pq!λₖ]∼ since p ∉ play(β), which implies β ◦ δ ∈ GE(G ‖ M) by Definition 7.18(1).
Case τ₁ ≈_ω pq!λₖ·τ₂ with τ₂ ≠ ε, or p ∉ play(τ₁). Let δ′ = pq!λₖ • δ. By Lemma 8.19(3) δ′ ∈ GE(G′ₖ ‖ M′·⟨p, λₖ, q⟩). By induction β ◦ δ′ ∈ GE(Gₖ ‖ M·⟨p, λₖ, q⟩). Since δ′ is defined, Lemma 8.17(2) implies pq!λₖ ◦ δ′ = δ. Since β ◦ δ′ and pq!λₖ ◦ δ′ are defined, by Lemma 8.17(3) and p ∉ play(β) we get pq!λₖ ◦ (β ◦ δ′) = β ◦ (pq!λₖ ◦ δ′) = β ◦ δ. By Lemma 8.19(1) pq!λₖ ◦ (β ◦ δ′) ∈ GE(G ‖ M). We conclude that β ◦ δ ∈ GE(G ‖ M).
If the last applied rule is [IComm-In] the proof is similar.
(2) By induction on the inference of the transition G ‖ M β −→ G′ ‖ M′, see Figure 5.
Base Cases. If the applied rule is [Ext-Out], then G = pq! ⊞_{i∈I} λᵢ; Gᵢ and β = pq!λₖ and G′ = Gₖ and M′ ≡ M·⟨p, λₖ, q⟩ for some k ∈ I. By assumption β • δ is defined. By Lemma 8.19(3) β • δ ∈ GE(G′ ‖ M′).
If the applied rule is [Ext-In], then G = pq?λ; G′ and β = pq?λ and M ≡ ⟨p, λ, q⟩·M′. By assumption β • δ is defined. By Lemma 8.19(4) β • δ ∈ GE(G′ ‖ M′).
Inductive Cases. If the last applied rule is [IComm-Out], then G = pq! ⊞_{i∈I} λᵢ; Gᵢ and G′ = pq! ⊞_{i∈I} λᵢ; G′ᵢ and Gᵢ ‖ M·⟨p, λᵢ, q⟩ β −→ G′ᵢ ‖ M′·⟨p, λᵢ, q⟩ for all i ∈ I, with p ∉ play(β). By Definition 7.18(1) δ ∈ GE(G ‖ M) implies δ = ev(ω, τ) where ω = otr(M) and τ ∈ Tr⁺(G). Then τ = pq!λₖ·τ′ for some k ∈ I, and δ = [ω, τ₁]∼ with τ₁ = (pq!λₖ·τ′) ⌈_ω ε by Definition 7.14. By Definition 7.13 we get either τ₁ ≈_ω pq!λₖ·τ₂ or p ∉ play(τ₁). Then by Definition 8.13(2) pq!λₖ • δ is defined unless τ₁ ≈_ω pq!λₖ·τ₂ and τ₂ = ε. We consider the two cases.
Case τ₁ ≈_ω pq!λₖ·τ₂ and τ₂ = ε. We get β • δ = [β ◮ ω, pq!λₖ]∼ since play(β) ∩ play(pq!λₖ) = ∅, which implies β • δ ∈ GE(G′ ‖ M′) by Definition 7.18(1).
Case τ₁ ≈_ω pq!λₖ·τ₂ with τ₂ ≠ ε, or p ∉ play(τ₁). Let δ′ = pq!λₖ • δ. By Lemma 8.19(3) δ′ ∈ GE(Gₖ ‖ M·⟨p, λₖ, q⟩). By assumption β • δ is defined. We first show that β • δ′ is also defined. Since β • δ and pq!λₖ • δ are defined, by Definition 8.13(2) we have four cases:
(a) τ₁ ≈_ω β·τ₃ for some τ₃ and τ₁ ≈_ω pq!λₖ·τ₂;
(b) τ₁ ≈_ω β·τ₃ and p ∉ play(τ₁);
(c) play(β) ∩ play(τ₁) = ∅ and τ₁ ≈_ω pq!λₖ·τ₂;
(d) play(β) ∩ play(τ₁) = ∅ and p ∉ play(τ₁).
Let ω′ = pq!λₖ ◮ ω = ω·pq!λₖ and ω″ = β ◮ ω′.
In Case (a) we have τ₁ ≈_ω β·pq!λₖ·τ₄ ≈_ω pq!λₖ·β·τ₄ for some τ₄. Let τ₂ = β·τ₄. Then δ = [ω, pq!λₖ·τ₂]∼ and therefore δ′ = [ω′, τ₂]∼ = [ω′, β·τ₄]∼. Hence β • δ′ = [ω″, τ₄]∼.
In Case (b) we have δ = [ω, β·τ₃]∼ and p ∉ play(β·τ₃). Therefore δ′ = [ω′, β·τ₃]∼. Hence β • δ′ = [ω″, τ₃]∼.
In Case (c) we have δ′ = [ω′, τ₂]∼ and β • δ′ = [ω″, τ₂]∼, since play(β) ∩ play(τ₁) = ∅ implies play(β) ∩ play(τ₂) = ∅.
In Case (d) we have δ′ = [ω′, τ₁]∼ and β • δ′ = [ω″, τ₁]∼.
So in all cases we conclude that β • δ′ is defined.
By induction β • δ′ ∈ GE(G′ₖ ‖ M′·⟨p, λₖ, q⟩). By Lemma 8.19(1) pq!λₖ ◦ (β • δ′) ∈ GE(G′ ‖ M′). Since δ′ is defined, Lemma 8.17(2) implies pq!λₖ ◦ δ′ = δ. Since β • δ′ and β • (pq!λₖ ◦ δ′) are defined and p ∉ play(β), by Lemma 8.17(4) we get pq!λₖ ◦ (β • δ′) = β • (pq!λₖ ◦ δ′) = β • δ. We conclude that β • δ ∈ GE(G′ ‖ M′).
If the last applied rule is [IComm-In] the proof is similar.

The function gec, which builds a sequence of g-events corresponding to a pair (ω, τ), is simply defined by applying the function ev to ω and to the non-empty prefixes of τ.

Definition 8.21 (Global events from pairs of o-traces and traces). Let τ ≠ ε be ω-well formed and let n be the length of τ. We define the sequence of global events corresponding to ω and τ by
gec(ω, τ) = δ₁; ··· ; δₙ where δᵢ = ev(ω, τ[1...i]) for all i, 1 ≤ i ≤ n.

The following lemma establishes the soundness of the above definition.
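Definition 8.21 can be read operationally: gec simply maps ev over the non-empty prefixes of τ. A minimal sketch, where `ev` is a placeholder (the real ev of Definition 7.14 performs the filtering ⌈, which we do not model here):

```python
# Sketch of Definition 8.21 (gec) under a hypothetical encoding:
# a trace is a sequence of communications, an o-trace a sequence of outputs.

def ev(omega, tau):
    """Placeholder for Definition 7.14; here it just pairs queue and trace."""
    return (tuple(omega), tuple(tau))

def gec(omega, tau):
    """One g-event per non-empty prefix tau[1...i] of tau."""
    assert len(tau) > 0, "tau must be non-empty"
    return [ev(omega, tau[:i]) for i in range(1, len(tau) + 1)]
```

With the real ev, the i-th event of gec(ω, τ) has i/o equal to τ[i], which is exactly Lemma 8.22(2).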
Lemma 8.22. If τ ≠ ε is ω-well formed and n is the length of τ, then:
(1) τ[1...i] is ω-well formed for all i, 1 ≤ i ≤ n;
(2) ev(ω, τ[1...i]) is defined and i/o(ev(ω, τ[1...i])) = τ[i] for all i, 1 ≤ i ≤ n.

Proof. The proof of (1) is immediate, since by Definitions 7.1 and 7.2 every prefix of an ω-well formed trace is ω-well formed. Fact (2) follows from Fact (1), Definition 7.14 and Lemma 7.15.

As for the function nec (Lemma 8.9), the g-events in a sequence generated by gec are not in conflict, and we can retrieve τ from gec(ω, τ) by using the function i/o given in Definition 7.12(2).

Lemma 8.23. Let τ ≠ ε be ω-well formed and gec(ω, τ) = δ₁; ··· ; δₙ.
(1) If 1 ≤ k, l ≤ n, then ¬(δₖ # δₗ).
(2) τ[i] = i/o(δᵢ) for all i, 1 ≤ i ≤ n.

Proof. (1) Let δᵢ = [ω, τᵢ]∼ for all i, 1 ≤ i ≤ n. By Definitions 8.21 and 7.14, τᵢ = τ[1...i] ⌈_ω ε. By Definition 7.13, if τᵢ @ p ≠ ε, then there are kᵢ ≤ i and τ′ᵢ such that play(τ[kᵢ]) = {p}, p ∉ play(τ′ᵢ) and τᵢ = τ[1...kᵢ−1] ⌈_ω (τ[kᵢ]·τ′ᵢ). By the same definition, all the τ[j] with j ≤ kᵢ and play(τ[j]) = {p} occur in τᵢ, in the same order as in τ. Therefore τᵢ @ p is a prefix of τ @ p for all p and all i, 1 ≤ i ≤ n. This implies that τₕ @ p cannot be in conflict with τₗ @ p for any p and any h, l, 1 ≤ h, l ≤ n.
(2) Immediate from Definitions 8.21, 7.14 and Lemma 7.15.

The following lemma, together with Lemma 8.22, ensures that gec(ω, τ) is defined when G ‖ M τ −→ G′ ‖ M′ and ω = otr(M).

Lemma 8.24. If G ‖ M τ −→ G′ ‖ M′ and ω = otr(M), then τ is ω-well formed.

Proof. The proof is by induction on τ.
Case τ = β. If β = pq!λ, then the result is immediate. If β = pq?λ, then from G ‖ M β −→ G′ ‖ M′ we get M ≡ ⟨p, λ, q⟩·M′ by Lemma 6.12(2), which implies ω ≡ pq!λ·ω′. Then the trace ω·β = pq!λ·ω′·pq?λ is well formed, since pq?λ is the first input of q from p and pq!λ is the first output of p to q, and therefore 1 ∝_{ω·β} |ω| + 1. So β is ω-well formed.
Case τ = β·τ′ with τ′ ≠ ε. Let G ‖ M β −→ G″ ‖ M″ τ′ −→ G′ ‖ M′ and ω′ = otr(M″). By induction τ′ is ω′-well formed. If β = pq!λ, then from G ‖ M β −→ G″ ‖ M″ we get M″ = M·⟨p, λ, q⟩ by Lemma 6.12(1). Therefore otr(M″) = ω·β = ω′. Since τ′ is (ω·β)-well formed, i.e. ω·β·τ′ is well formed, we may conclude that τ = β·τ′ is ω-well formed.
If β = pq?λ, as in the base case we get M ≡ ⟨p, λ, q⟩·M″ by Lemma 6.12(2), and thus ω ≡ pq!λ·ω′. We know that τ′ is ω′-well formed, i.e. that ω′·τ′ is well formed. Therefore pq!λ·ω′·pq?λ·τ′ is well formed, since 1 ∝_{ω·τ} |ω| + 1, and we may conclude that τ is ω-well formed.

The following lemma mirrors Lemma 8.10.

Lemma 8.25. (1)
Let τ = β · τ ′ and ω = β ⊲ ω ′ . If gec ( ω, τ ) = δ ; · · · ; δ n and gec ( ω ′ , τ ′ ) = δ ′ ; · · · ; δ ′ n , then β ◦ δ ′ i = δ i for all i, ≤ i ≤ n. (2) Let τ = β · τ ′ and ω ′ = β ◮ ω . If gec ( ω, τ ) = δ ; · · · ; δ n and gec ( ω ′ , τ ′ ) = δ ′ ; · · · ; δ ′ n , then β • δ i = δ ′ i for all i, ≤ i ≤ n.Proof. (1) By Definition 8.21 δ i = ev ( ω, β · τ ′ [1 ... i ]) and δ ′ i = ev ( ω ′ , τ ′ [1 ... i ]) for all i , 2 ≤ i ≤ n .Then by Lemma 8.16(1) β ◦ δ ′ i = δ i for all i , 2 ≤ i ≤ n .(2) By Fact (1) and Lemma 8.17(1).We end this subsection with the two theorems for simple agts discussed at the beginningof the whole section, which relate the transition sequences of a simple agt with the provingsequences of the associated PES. Theorem 8.26 . If G k M τ −→ G ′ k M ′ , then gec ( otr ( M ) , τ ) is a proving sequence in S G ( G k M ) .Proof. Let ω = otr ( M ). By Lemma 8.24 τ is ω -well formed. Then by Lemma 8.22 gec ( ω, τ )is defined and by Definition 8.21 gec ( ω, τ ) = δ ; · · · ; δ n , where δ i = ev ( ω, τ [1 ... i ]) for all i ,1 ≤ i ≤ n . We proceed by induction on τ . Case τ = β . In this case, gec ( ω, β ) = δ = ev ( ω, β ). By Definition 7.14 we have ev ( ω, β ) = [ ω, β ⌈ ω ǫ ] ∼ . By Definition 7.13 [ ω, β ⌈ ω ǫ ] ∼ = [ ω, β ] ∼ since β is ω -well formed.We use now a further induction on the inference of the transition G k M β −→ G ′ k M ′ , seeFigure 5. Base Cases.
The rule applied is [E xt -O ut ] or [E xt -I n ]. Therefore β ∈ Tr + ( G ). By Defini-tion 7.18(1) this implies ev ( ω, β ) ∈ GE ( G k M ). Inductive Cases.
If the last applied rule is [IComm-Out], then G = pq!⊞i∈I λi; Gi and G′ = pq!⊞i∈I λi; G′i and Gi ‖ M·⟨p, λi, q⟩ →β G′i ‖ M′·⟨p, λi, q⟩ for all i ∈ I, with p ∉ play(β). We have otr(M·⟨p, λi, q⟩) = ω·pq!λi. By induction we get gec(ω·pq!λi, β) = δ′1 = [ω·pq!λi, β]∼ ∈ GE(Gi ‖ M·⟨p, λi, q⟩). By Lemma 8.19(1) pq!λi ∘ δ′1 ∈ GE(G ‖ M). Now, from p ∉ play(β) it follows that pq!λi is not a local cause of β, namely ¬(req(1, pq!λi·β)). From Lemma 8.24 β is ω-well formed. So, if β is an input, its matching output must be in ω. Hence pq!λi is not a cross-cause of β, namely ¬(1+|ω| ∝_{ω·pq!λi·β} 2+|ω|). Therefore pq!λi·β is not ω-pointed. By Definition 8.13(1) we get pq!λi ∘ δ′1 = [ω, β]∼ = δ1. We conclude again that δ1 ∈ GE(G ‖ M), and clearly δ1 is a proving sequence in S_G(G ‖ M) since β has no proper prefix.
If the last applied rule is [IComm-In] the proof is similar.

Case τ = β · τ′ with τ′ ≠ ε. From G ‖ M →τ G′ ‖ M′ we get G ‖ M →β G″ ‖ M″ →τ′ G′ ‖ M′ for some G″, M″. Let ω′ = otr(M″). By Lemma 8.24 τ′ is ω′-well formed. Thus gec(ω′, τ′) is defined by Lemma 8.22. Let gec(ω′, τ′) = δ′2; · · · ; δ′n. By induction gec(ω′, τ′) is a proving sequence in S_G(G″ ‖ M″). By Lemma 8.25(1) δj = β ∘ δ′j for all j, 2 ≤ j ≤ n. By Lemma 8.20(1) this implies δj ∈ GE(G ‖ M) for all j, 2 ≤ j ≤ n. From the proof of the base case we know that δ1 = [ω, β]∼ ∈ GE(G ‖ M). What is left to show is that gec(ω, τ) is a proving sequence in S_G(G ‖ M). By Lemma 8.23(1) no two events in this sequence can be in conflict.
Let δ ∈ GE(G ‖ M) and δ < δk for some k, 1 ≤ k ≤ n. Note that this implies k > 1. If β • δ is undefined, then by Definition 8.13(2) either δ = δ1 or δ = [ω, τ]∼ with τ ⊑ω β·τ′ and play(β) ⊆ play(τ). In the first case we are done. In the second case τ@play(β) ≠ β@play(β), which implies δ # δ1. Since δ < δk and conflict is hereditary, it follows that δ1 # δk, which contradicts what was said above. Hence this second case is not possible. If β • δ is defined, by Lemma 8.20(2) β • δ ∈ GE(G″ ‖ M″) and by Lemma 8.18(2) β • δ < β • δk. Let δ′ = β • δ. By Lemma 8.25(2) β • δj = δ′j for all j, 2 ≤ j ≤ n. Thus we have δ′ < δ′k. Since gec(ω′, τ′) is a proving sequence in S_G(G″ ‖ M″), by Definition 3.6 there is h < k such that δ′ = δ′h. By Lemma 8.17(2) we derive δ = β ∘ δ′ = β ∘ δ′h = δh.

Theorem 8.27. If δ1; . . . ; δn is a proving sequence in S_G(G ‖ M), then G ‖ M →τ G′ ‖ M′ where τ = i/o(δ1) · . . . · i/o(δn).

Proof. The proof is by induction on the length n of the proving sequence. Let ω = otr(M).

Case n = 1. Let i/o(δ1) = β. Since δ1 is the first event of a proving sequence, it can have no causes, so it must be that δ1 = [ω, β]∼. We show this case by induction on d = depth(G, play(β)).

Case d = 0. If β = pq!λ we have G = pq!⊞i∈I λi; Gi with λk = λ for some k ∈ I. We deduce G ‖ M →β Gk ‖ M·⟨p, λ, q⟩ by applying Rule [Ext-Out]. If β = pq?λ we have G = pq?λ; G′. Since G ‖ M is well formed, by Rule [In] of Figure 3 we get M ≡ ⟨p, λ, q⟩·M′. We deduce G ‖ M →β G′ ‖ M′ by applying Rule [Ext-In].

Case d > 0. We are in one of the two situations:
(1) G = rs!⊞i∈I λi; Gi with r ∉ play(β);
(2) G = rs?λ′; G″ with s ∉ play(β).
In Case (1), r ∉ play(β) implies that rs!λi • δ1 is defined for all i ∈ I by Definition 8.13(2). By Lemma 8.19(3) rs!λi • δ1 ∈ GE(Gi ‖ M·⟨r, λi, s⟩) for all i ∈ I. Lemma 6.4(1) implies depth(G, play(β)) > depth(Gi, play(β)) for all i ∈ I.
By induction hypothesis we have Gi ‖ M·⟨r, λi, s⟩ →β G′i ‖ M′·⟨r, λi, s⟩ for all i ∈ I. Then we may apply Rule [IComm-Out] to deduce rs!⊞i∈I λi; Gi ‖ M →β rs!⊞i∈I λi; G′i ‖ M′.
In Case (2), since G ‖ M is well formed we get M ≡ ⟨r, λ′, s⟩·M″ by Rule [In] of Figure 3. Hence ω = rs!λ′ · ω′ where ω′ = otr(M″). This and s ∉ play(β) imply that rs?λ′ • δ1 is defined by Definition 8.13(2). By Lemma 8.19(4) rs?λ′ • δ1 ∈ GE(G″ ‖ M″). Lemma 6.4(2) gives depth(G, play(β)) > depth(G″, play(β)). By induction hypothesis G″ ‖ M″ →β G‴ ‖ M‴. Then we may apply Rule [IComm-In] to deduce rs?λ′; G″ ‖ ⟨r, λ′, s⟩·M″ →β rs?λ′; G‴ ‖ ⟨r, λ′, s⟩·M‴.

Case n > 1. Let i/o(δ1) = β, and let G ‖ M →β G″ ‖ M″ be the corresponding transition as obtained from the base case. We show that β • δj is defined for all j, 2 ≤ j ≤ n. If β • δk were undefined for some k, 2 ≤ k ≤ n, then by Definition 8.13(2) either δk = δ1 or δk = [ω, τ]∼ with τ ⊑ω β·τ′ and play(β) ⊆ play(τ). In the second case β@play(β) ≠ τ@play(β), which implies δk # δ1. So both cases are impossible. Since β • δj is defined, by Lemma 8.20(2) we may define δ′j = β • δj ∈ GE(G″ ‖ M″) for all j, 2 ≤ j ≤ n. We show that δ′2; · · · ; δ′n is a proving sequence in S_G(G″ ‖ M″). By Lemma 8.17(2) δj = β ∘ δ′j for all j, 2 ≤ j ≤ n. Then by Lemma 8.18(3) no two events in this sequence can be in conflict.
Let δ ∈ GE(G″ ‖ M″) and δ < δ′h for some h, 2 ≤ h ≤ n. By Lemma 8.20(1) β ∘ δ and β ∘ δ′h belong to GE(G ‖ M). By Lemma 8.18(1) β ∘ δ < β ∘ δ′h. By Lemma 8.17(2) β ∘ δ′h = δh. Let δ′ = β ∘ δ. Then δ′ < δh implies, by Definition 3.6 and the fact that S_G(G ‖ M) is a PES, that there is l < h such that δ′ = δl. By Lemma 8.17(1) we get δ = β • δ′ = β • δl = δ′l.
We have shown that δ′2; · · · ; δ′n is a proving sequence in S_G(G″ ‖ M″).
By induction G″ ‖ M″ →τ′ G′ ‖ M′ where τ′ = i/o(δ′2) · . . . · i/o(δ′n). Let τ = i/o(δ1) · . . . · i/o(δn). Since i/o(δ′j) = i/o(δj) for all j, 2 ≤ j ≤ n, we have τ = β · τ′. Hence G ‖ M →β G″ ‖ M″ →τ′ G′ ‖ M′ is the required transition sequence.

8.3. Isomorphism.
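The proofs of Theorems 8.26 and 8.27 manipulate proving sequences throughout. As a toy illustration (our own sketch, not part of the paper's formal development, and restricted to finite prime event structures with explicitly listed causes and conflicts), the check that a list of events forms a proving sequence in the sense of Definition 3.6 can be rendered as follows:

```python
# Toy sketch of proving sequences for a finite prime event structure:
# a proving sequence e1; ...; en lists distinct, pairwise conflict-free
# events such that every cause of e_i already occurs among e1, ..., e_{i-1}.
# All names below are ours, chosen only for this illustration.

def is_proving_sequence(seq, causes, conflict):
    """seq: list of events; causes: event -> set of its proper causes;
    conflict: set of frozensets {e, f} of conflicting event pairs."""
    seen = set()
    for e in seq:
        if e in seen:
            return False                       # events must be distinct
        if any(frozenset({e, f}) in conflict for f in seen):
            return False                       # pairwise conflict-freeness
        if not causes.get(e, set()) <= seen:
            return False                       # causes must occur earlier
        seen.add(e)
    return True

# An output event "o" (say pq!l), its matching input "i" (pq?l, caused
# by the output), and an alternative output "o2" in conflict with "o".
causes = {"i": {"o"}}
conflict = {frozenset({"o", "o2"})}

assert is_proving_sequence(["o", "i"], causes, conflict)
assert not is_proving_sequence(["i", "o"], causes, conflict)   # cause missing
assert not is_proving_sequence(["o", "o2"], causes, conflict)  # conflict
```

For flow event structures the causality check is subtler, since an event may have several, possibly conflicting, potential causes; this is exactly the extra generality exploited when interpreting networks.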
We are finally able to show that the ES interpretation of a network is equivalent, when the session is typable, to the ES interpretation of its simple agt.
To prove our main theorem, we will also use the following separation result from [BC91] (Lemma 2.8, p. 12). Recall from Section 3 that C(S) denotes the set of configurations of S.

Lemma 8.28 (Separation [BC91]). Let S = (E, ≺, #) be a flow event structure and X, X′ ∈ C(S) be such that X ⊂ X′. Then there exists e ∈ X′\X such that X ∪ {e} ∈ C(S).

We may now establish the isomorphism between the domain of configurations of the FES of a typable network and the domain of configurations of the PES of its simple agt. In the proof of this result, we will use the characterisation of configurations as proving sequences, as given in Proposition 3.7. We will also take the liberty of writing ρ1; · · · ; ρn ∈ C(S_N(N ‖ M)) to mean that ρ1; · · · ; ρn is a proving sequence such that {ρ1, . . . , ρn} ∈ C(S_N(N ‖ M)), and similarly for δ1; · · · ; δn ∈ C(S_G(G ‖ M)).

Theorem 8.29. If ⊢ N ‖ M : G ‖ M, then D(S_N(N ‖ M)) ≃ D(S_G(G ‖ M)).

Proof. Let ω = otr(M). We start by constructing a bijection between the proving sequences of S_N(N ‖ M) and the proving sequences of S_G(G ‖ M). By Theorem 8.12, if ρ1; · · · ; ρn ∈ C(S_N(N ‖ M)), then N ‖ M →τ N′ ‖ M′ where τ = i/o(ρ1) · . . . · i/o(ρn). By applying iteratively Subject Reduction (Theorem 6.18), we obtain G ‖ M →τ G′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′. By Theorem 8.26, we get gec(ω, τ) ∈ C(S_G(G ‖ M)).
By Theorem 8.27, if δ1; · · · ; δn ∈ C(S_G(G ‖ M)), then G ‖ M →τ G′ ‖ M′, where τ = i/o(δ1) · . . . · i/o(δn).
By applying iteratively Session Fidelity (Theorem 6.19), we obtain N ‖ M →τ N′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′. By Theorem 8.11, we get nec(ω, τ) ∈ C(S_N(N ‖ M)).
Therefore we have a bijection between D(S_N(N ‖ M)) and D(S_G(G ‖ M)), given by nec(ω, τ) ↔ gec(ω, τ) for any τ generated by the (bisimilar) LTSs of N ‖ M and G ‖ M.
We now show that this bijection preserves inclusion of configurations. By Lemma 8.28 it is enough to prove that if ρ1; · · · ; ρn ∈ C(S_N(N ‖ M)) is mapped to δ1; · · · ; δn ∈ C(S_G(G ‖ M)), then ρ1; · · · ; ρn; ρ ∈ C(S_N(N ‖ M)) iff δ1; · · · ; δn; δ ∈ C(S_G(G ‖ M)), where δ1; · · · ; δn; δ is the image of ρ1; · · · ; ρn; ρ under the bijection. So, suppose ρ1; · · · ; ρn ∈ C(S_N(N ‖ M)) and δ1; · · · ; δn ∈ C(S_G(G ‖ M)) are such that ρ1; · · · ; ρn = nec(ω, τ) ↔ gec(ω, τ) = δ1; · · · ; δn. Then i/o(ρ1) · . . . · i/o(ρn) = τ = i/o(δ1) · . . . · i/o(δn).
By Theorem 8.12, if ρ1; · · · ; ρn; ρ ∈ C(S_N(N ‖ M)) with i/o(ρ) = β, then N ‖ M →τ·β N′ ‖ M′. By applying iteratively Subject Reduction (Theorem 6.18) we get G ‖ M →τ·β G′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′. We conclude that gec(ω, τ·β) ∈ C(S_G(G ‖ M)) by Theorem 8.26.
By Theorem 8.27, if δ1; · · · ; δn; δ ∈ C(S_G(G ‖ M)) with i/o(δ) = β, then G ‖ M →τ·β G′ ‖ M′. By applying iteratively Session Fidelity (Theorem 6.19) we get N ‖ M →τ·β N′ ‖ M′ and ⊢ N′ ‖ M′ : G′ ‖ M′. We conclude that nec(ω, τ·β) ∈ C(S_N(N ‖ M)) by Theorem 8.11.

9. Related Work and Conclusions

Session types, as originally proposed in [HVK98, HYC16] for binary sessions, are grounded on types for the π-calculus. Early proposals for typing channels in the π-calculus include simple sorts [Mil91], input/output types [PS96] and usage types [Kob05].
In particular, the notion of progress for multiparty sessions [DY11, CDCYP16] is inspired by the notion of lock-freedom as developed for the π-calculus in [Kob02, Kob06]. The more recent work [DGS12] provides further evidence of the strong relationship between binary session types and channel types in the linear π-calculus. The notion of lock-freedom for the linear π-calculus was also revisited in [Pad15].
Multiparty sessions disciplined by global types were introduced in the keystone papers [HYC08, HYC16]. These papers, as well as most subsequent work on multiparty session types (for a survey see [HLV+16]), deal with more expressive calculi, where exchanged messages consist of pairs of labels and values. In that more general setting, global types are projected onto session types and in turn session types are assigned to processes. Here, instead, we consider only single sessions and pure label exchange: this allows us to project global types directly to processes, as in [SDC19], where the considered global types are those of [HYC16]. We chose to concentrate on this very simple calculus, as our working plan was already quite challenging. A discussion on possible extensions of our work to more expressive calculi may be found at the end of this section.
Standard global types are too restrictive for typing processes which communicate asynchronously. A powerful typability extension is obtained by the use of the subtyping relation given in [MYH09]. This subtyping allows inputs and outputs to be exchanged, stating that anticipating outputs is better. The rationale is that outputs are not blocking, while inputs are blocking in asynchronous communication. Unfortunately, this subtyping is undecidable [BCZ17, LY17], and thus type systems equipped with this subtyping are not effective. Decidable restrictions of this subtyping relation have been proposed [BCZ17, LY17, BCZ18]. In particular, subtyping is decidable when both internal and external choices are forbidden in one of the two compared processes [BCZ17].
This result is improved in [BCZ18], where both the subtype and the supertype can contain either internal or external choices. More interestingly, the work [BCL+19] presents a sound (though not complete) algorithm for checking asynchronous subtyping as defined in [CDSY17]. A very elegant formulation of asynchronous subtyping is given in [GPP+21].
Event Structure semantics have previously been proposed for several process calculi, in particular for the π-calculus [CVY07, VY10, CVY12, Cri15, CKV15, CKV16]. We refer the reader to our companion paper [CDG19a] for a more extensive discussion on ES semantics for process calculi.
It is noteworthy that all the above-mentioned ES semantics were given for calculi with synchronous communication. This is perhaps not surprising, since ESs are generally equipped with a synchronisation algebra when modelling process calculi, and a communication is represented by a single event resulting from the synchronisation of two events. This is also the reason why, in our previous paper [CDG19a], we started by considering an ES semantics for a synchronous session calculus with standard global types. In the present paper, instead, as in [LMT20], a communication is represented by two distinct events in the ES, one for the output and the other for the matching input. Defining the "matching" relation on events, be they events of the FES of a network or events of the PES of an agt, is not entirely trivial.
While agts are interpreted as PESs, the simplest kind of ES, networks are interpreted as FESs, a subclass of Winskel's Stable Event Structures [Win88] that allows for disjunctive causality and therefore provides a more compact representation of networks in the presence of forking computations. Such a feature of FESs may be seen at work in Example 5.11, where the event of participant r has two conflicting causes. In a PES this event would have to be replaced by two distinct events, one for each of its two global histories (this is indeed what happens in the PES of the associated agt).
This work builds on the companion paper [CDG19a], where synchronous rather than asynchronous communication was considered.
In that paper too, networks were interpreted as FESs, and global types, which were the standard ones, were interpreted as PESs. The key result was again an isomorphism between the configuration domain of the FES of a typed network and that of the PES of its global type. Thus, the present paper completes the picture set up in [CDG19a] by exploring the "asynchronous side" of the same construction.
As regards future work, we already sketched some possible directions in [CDG19a], including the investigation of reversibility, which would benefit from previous work on reversible session calculi [TY14, TY16, MP17a, MP17b, NY17, CDG19b] and reversible Event Structures [PU15, CKV16, GPY17, GPY18]. We also plan to investigate the extension of our asynchronous calculus with delegation. While delegation is usually defined in session calculi with channels, and modelled using the channel passing mechanism of the π-calculus, an alternative notion of delegation for a session calculus without channels, called "internal delegation", was proposed in [CDGH20]. Hence, one possibility would be to investigate internal delegation in our asynchronous calculus without channels; another option would be to depart from our present calculus, admittedly very simple, and investigate the classical notion of delegation in an asynchronous calculus with channels.
Note that delegation remains essentially a synchronous mechanism, even in the asynchronous setting: indeed, unlike ordinary outputs, which become non-blocking, delegation remains blocking for the principal, who has to wait until the deputy returns the delegation to be able to proceed. As a matter of fact, this is quite reasonable: not only does it prevent the issue of "power vacancy" that would arise if the role of the principal disappeared from the network for some time, but it also seems natural to assume that the principal delegates a task only when she has the guarantee that the deputy will accept it, and that both of them reside in the same locality (where communication may be assumed to be synchronous).

Acknowledgment. We are indebted to Francesco Dagnino for suggesting a simplification in the definition of input/output matching for asynchronous global types.

References

[ABB+16] Davide Ancona, Viviana Bono, Mario Bravetti, Joana Campos, Giuseppe Castagna, Pierre-Malo Deniélou, Simon J. Gay, Nils Gesbert, Elena Giachino, Raymond Hu, Einar Broch Johnsen, Francisco Martins, Viviana Mascardi, Fabrizio Montesi, Rumyana Neykova, Nicholas Ng, Luca Padovani, Vasco T. Vasconcelos, and Nobuko Yoshida. Behavioral types in programming languages.
Foundations and Trends in Programming Languages , 3(2-3):95–230, 2016.[BC87] Gérard Boudol and Ilaria Castellani. On the semantics of concurrency: partial orders and transi-tion systems. In Hartmut Ehrig, Robert A. Kowalski, Giorgio Levi, and Ugo Montanari, editors,
TAPSOFT , volume 249 of
LNCS , pages 123–137. Springer, 1987.[BC88] Gérard Boudol and Ilaria Castellani. Permutation of transitions: an event structure semantics forCCS and SCCS. In Jaco W. de Bakker, Willem P. de Roever, and Grzegorz Rozenberg, editors,
REX: Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency , volume 354 of LNCS , pages 411–427. Springer, 1988.[BC91] Gérard Boudol and Ilaria Castellani. Flow models of distributed computations: event structures and nets. Research Report 1482, INRIA, 1991.[BC94] Gérard Boudol and Ilaria Castellani. Flow models of distributed computations: three equivalent semantics for CCS.
Information and Computation , 114(2):247–314, 1994.
[BCL+19] Mario Bravetti, Marco Carbone, Julien Lange, Nobuko Yoshida, and Gianluigi Zavattaro. A sound algorithm for asynchronous session subtyping. In Wan J. Fokkink and Rob van Glabbeek, editors, CONCUR , volume 140 of LIPIcs , pages 38:1–38:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[BCZ17] Mario Bravetti, Marco Carbone, and Gianluigi Zavattaro. Undecidability of asynchronous session subtyping.
Information and Computation , 256:300–320, 2017.[BCZ18] Mario Bravetti, Marco Carbone, and Gianluigi Zavattaro. On the boundary between decidabilityand undecidability of asynchronous session subtyping.
Theoretical Computer Science , 722:19–51,2018.[BHR84] S. Brookes, C.A.R. Hoare, and A. Roscoe. A theory of communicating sequential processes.
Journalof ACM , 31(3):560–599, 1984.[BM94] Christel Baier and Mila E. Majster-Cederbaum. The connection between an event structure seman-tics and an operational semantics for TCSP.
Acta Informatica , 31(1):81–104, 1994.[CC13] Felice Cardone and Mario Coppo. Recursive types. In Henk Barendregt, Wil Dekkers, and RichardStatman, editors,
Lambda Calculus with Types , Perspectives in Logic, pages 377–576. CambridgeUniversity Press, 2013.[CDCYP16] Mario Coppo, Mariangiola Dezani-Ciancaglini, Nobuko Yoshida, and Luca Padovani. Globalprogress for dynamically interleaved multiparty sessions.
Mathematical Structures in ComputerScience , 26(2):238–302, 2016.[CDG19a] Ilaria Castellani, Mariangiola Dezani-Ciancaglini, and Paola Giannini. Event structure semanticsfor multiparty sessions. In Michele Boreale, Flavio Corradini, Michele Loreti, and Rosario Pugliese,editors,
Models, Languages, and Tools for Concurrent and Distributed Programming - Essays Dedicatedto Rocco De Nicola on the Occasion of His 65th Birthday , volume 11665 of
LNCS , pages 340–363.Springer, 2019.[CDG19b] Ilaria Castellani, Mariangiola Dezani-Ciancaglini, and Paola Giannini. Reversible sessions withflexible choices.
Acta Informatica , 56(7):553–583, 2019.[CDGH20] Ilaria Castellani, Mariangiola Dezani-Ciancaglini, Paola Giannini, and Ross Horne. Global Typeswith Internal Delegation.
Theoretical Computer Science , 807:128–153, 2020.[CDSY17] Tzu-Chun Chen, Mariangiola Dezani-Ciancaglini, Alceste Scalas, and Nobuko Yoshida. On thepreciseness of subtyping in session types.
Logical Methods in Computer Science , 13(2), 2017.[CKV15] Ioana Cristescu, Jean Krivine, and Daniele Varacca. Rigid families for CCS and the π -calculus.In Martin Leucker, Camilo Rueda, and Frank D. Valencia, editors, ICTAC , volume 9399 of
LNCS ,pages 223–240. Springer, 2015.[CKV16] Ioana Cristescu, Jean Krivine, and Daniele Varacca. Rigid families for the reversible π -calculus.In Simon J. Devitt and Ivan Lanese, editors, Reversible Computation , volume 9720 of
LNCS , pages3–19. Springer, 2016.[Cou83] Bruno Courcelle. Fundamental properties of infinite trees.
Theoretical Computer Science , 25:95–169,1983.[CP10] Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In Paul Gastinand François Laroussinie, editors,
CONCUR , volume 6269 of
LNCS , pages 222–236. Springer, 2010.[CPT16] Luís Caires, Frank Pfenning, and Bernardo Toninho. Linear logic propositions as session types.
Mathematical Structures in Computer Science , 26(3):367–423, 2016.[Cri15] Ioana Cristescu.
Operational and denotational semantics for the reversible π -calculus . PhD thesis, Uni-versity Paris Diderot - Paris 7, 2015.[CVY07] Silvia Crafa, Daniele Varacca, and Nobuko Yoshida. Compositional event structure semantics forthe internal π -calculus. In Luís Caires and Vasco T. Vasconcelos, editors, CONCUR , volume 4703of
LNCS , pages 317–332. Springer, 2007. CVY12] Silvia Crafa, Daniele Varacca, and Nobuko Yoshida. Event structure semantics of parallel extru-sion in the π -calculus. In Lars Birkedal, editor, FOSSACS , volume 7213 of
LNCS , pages 225–239.Springer, 2012.[CZ97] Ilaria Castellani and Guo Qiang Zhang. Parallel product of event structures.
Theoretical Computer Science , 179(1-2):203–215, 1997.
[DCGJ+16] Mariangiola Dezani-Ciancaglini, Silvia Ghilezan, Svetlana Jaksic, Jovanka Pantovic, and Nobuko Yoshida. Precise subtyping for synchronous multiparty sessions. In
PLACES , volume 203 of
EPTCS ,pages 29 – 44. Open Publishing Association, 2016.[DDM90] Pierpaolo Degano, Rocco De Nicola, and Ugo Montanari. A partial ordering semantics for CCS.
Theoretical Computer Science , 75(3):223–262, 1990.[DDNM88] Pierpaolo Degano, Rocco De Nicola, and Ugo Montanari. On the consistency of truly concurrentoperational and denotational semantics. In Ashok K. Chandra, editor,
LICS , pages 133–141. IEEEComputer Society Press Press, 1988.[DGS12] Ornela Dardha, Elena Giachino, and Davide Sangiorgi. Session types revisited. In Danny DeSchreye, Gerda Janssens, and Andy King, editors,
PPDP , pages 139–150. ACM, 2012.[DY11] Pierre-Malo Deniélou and Nobuko Yoshida. Dynamic multirole session types. In Mooly SagivThomas Ball, editor,
POPL , pages 435–446. ACM Press, 2011.[DY12] Pierre-Malo Deniélou and Nobuko Yoshida. Multiparty session types meet communicating au-tomata. In Helmut Seidl, editor,
ESOP , volume 7211 of
LNCS , pages 194–213. Springer, 2012.[GG04] Rob J. van Glabbeek and Ursula Goltz. Well-behaved flow event structures for parallel compositionand action refinement.
Theoretical Computer Science , 311(1-3):463–478, 2004.
[GPP+21] Silvia Ghilezan, Jovanka Pantović, Ivan Prokić, Alceste Scalas, and Nobuko Yoshida. Precise subtyping for asynchronous multiparty sessions. In
POPL . ACM Press, 2021. to appear.[GPY17] Eva Graversen, Iain Phillips, and Nobuko Yoshida. Towards a categorical representation of re-versible event structures. In Vasco T. Vasconcelos and Philipp Haller, editors,
PLACES , volume246 of
EPTCS , pages 49–60. Open Publishing Association, 2017.[GPY18] Eva Graversen, Iain Phillips, and Nobuko Yoshida. Event structure semantics of (controlled)reversible CCS. In Jarkko Kari and Irek Ulidowski, editors,
Reversible Computation , volume 11106of
LNCS , pages 102–122. Springer, 2018.
[HLV+16] Hans Hüttel, Ivan Lanese, Vasco T. Vasconcelos, Luís Caires, Marco Carbone, Pierre-Malo Deniélou, Dimitris Mostrous, Luca Padovani, António Ravara, Emilio Tuosto, Hugo Torres Vieira, and Gianluigi Zavattaro. Foundations of session types and behavioural contracts.
ACM ComputingSurveys , 49(1):3:1–3:36, 2016.[HVK98] Kohei Honda, Vasco T. Vasconcelos, and Makoto Kubo. Language primitives and type disciplinefor structured communication-based programming. In Chris Hankin, editor,
ESOP , volume 1381of
LNCS , pages 122–138. Springer, 1998.[HYC08] Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session types. InGeorge C. Necula and Philip Wadler, editors,
POPL , pages 273–284. ACM Press, 2008.[HYC16] Kohei Honda, Nobuko Yoshida, and Marco Carbone. Multiparty asynchronous session types.
Journal of ACM , 63(1):9:1–9:67, 2016.[Kat96] Joost-Pieter Katoen.
Quantitative and qualitative extensions of event structures . PhD thesis, Universityof Twente, 1996.[Kob02] Naoki Kobayashi. A type system for lock-free processes.
Information and Computation , 177(2):122–159, 2002.[Kob05] Naoki Kobayashi. Type-based information flow analysis for the pi-calculus.
Acta Informatica , 42(4-5):291–347, 2005.[Kob06] Naoki Kobayashi. A new type system for deadlock-free processes. In Christel Baier and HolgerHermanns, editors,
CONCUR , volume 4137 of
LNCS , pages 233–247. Springer, 2006.[Lan93] Rom Langerak. Bundle event structures: a non-interleaving semantics for LOTOS. In Michel Diazand Roland Groz, editors,
Formal Description Techniques for Distributed Systems and CommunicationProtocols , pages 331–346. North-Holland, 1993.[LG91] Rita Loogen and Ursula Goltz. Modelling nondeterministic concurrent processes with event struc-tures.
Fundamenta Informaticae , 14(1):39–74, 1991. LMT20] Ugo de’ Liguoro, Hernán C. Melgratti, and Emilio Tuosto. Towards refinable choreographies. InJulien Lange, Anastasia Mavridou, Larisa Safina, and Alceste Scalas, editors,
ICE , volume 324 of
EPTCS , pages 61–77. Open Publishing Association, 2020.[LTY15] Julien Lange, Emilio Tuosto, and Nobuko Yoshida. From communicating machines to graphicalchoreographies. In Sriram K. Rajamani and David Walker, editors,
POPL , pages 221–232. ACMPress, 2015.[LY17] Julien Lange and Nobuko Yoshida. On the undecidability of asynchronous session subtyping.In Javier Esparza and Andrzej S. Murawski, editors,
FOSSACS , volume 10203 of
LNCS , pages441–457, 2017.[Mil91] Robin Milner. The polyadic π -calculus: a tutorial. In Friedrich L. Bauer, Wilfried Brauer, andHelmut Schwichtenberg, editors, Logic and Algebra of Specification , NATO ASI. Springer, 1991.[MP17a] Claudio Antares Mezzina and Jorge A. Pérez. Causally consistent reversible choreographies: amonitors-as-memories approach. In
PPDP , pages 127–138. ACM Press, 2017.[MP17b] Claudio Antares Mezzina and Jorge A. Pérez. Reversibility in session-based concurrency: A freshlook.
Journal of Logic and Algebraic Methods in Programming , 90:2–30, 2017.[MYH09] Dimitris Mostrous, Nobuko Yoshida, and Kohei Honda. Global principal typing in partiallycommutative asynchronous sessions. In Giuseppe Castagna, editor,
ESOP , volume 5502 of
LNCS ,pages 316–332. Springer, 2009.[NPW81] Mogens Nielsen, Gordon Plotkin, and Glynn Winskel. Petri nets, event structures and domains,part I.
Theoretical Computer Science , 13(1):85–108, 1981.[NY17] Rumyana Neykova and Nobuko Yoshida. Let it recover: multiparty protocol-induced recovery.In Peng Wu and Sebastian Hack, editors, CC , pages 98–108. ACM Press, 2017.[Old86] Ernst-Rüdiger Olderog. TCSP: theory of communicating sequential processes. In Wilfried Brauer,Wolfgang Reisig, and Grzegorz Rozenberg, editors, Advances in Petri Nets , volume 255 of
LNCS ,pages 441–465. Springer, 1986.[Pad15] Luca Padovani. Type Reconstruction for the Linear π -Calculus with Composite Regular Types. Logical Methods in Computer Science , 11(4), 2015.[PCPT14] Jorge A. Pérez, Luís Caires, Frank Pfenning, and Bernardo Toninho. Linear logical relations andobservational equivalences for session-based concurrency.
Information and Computation , 239:254–302, 2014.[Pie02] Benjamin C. Pierce.
Types and Programming Languages . MIT Press, 2002.[PS96] Benjamin C. Pierce and Davide Sangiorgi. Typing and subtyping for mobile processes.
Mathemat-ical Structures in Computer Science , 6(5):376–385, 1996.[PU15] Iain C.C. Phillips and Irek Ulidowski. Reversibility and asymmetric conflict in event structures.
Journal of Logical and Algebraic Methods in Programming , 84(6):781 – 805, 2015.[SDC19] Paula Severi and Mariangiola Dezani-Ciancaglini. Observational Equivalence for Multiparty Ses-sions.
Fundamenta Informaticae , 167:267–305, 2019.[TCP11] Bernardo Toninho, Luís Caires, and Frank Pfenning. Dependent session types via intuitionisticlinear type theory. In Peter Schneider-Kamp and Michael Hanus, editors,
PPDP , pages 161–172.ACM Press, 2011.[TG18] Emilio Tuosto and Roberto Guanciale. Semantics of global view of choreographies.
Journal of Logicand Algebraic Methods in Programming , 95:17–40, 2018.[THK94] Kaku Takeuchi, Kohei Honda, and Makoto Kubo. An interaction-based language and its typingsystem. In Chris Hankin, editor,
PARLE , volume 817 of
LNCS , pages 122–138. Springer, 1994.[TY14] Francesco Tiezzi and Nobuko Yoshida. Towards reversible sessions. In Alastair F. Donaldson andVasco T. Vasconcelos, editors,
PLACES , volume 155 of
EPTCS , pages 17–24. Open PublishingAssociation, 2014.[TY16] Francesco Tiezzi and Nobuko Yoshida. Reversing single sessions. In Simon J. Devitt and IvanLanese, editors, RC , volume 9720 of LNCS , pages 52–69. Springer, 2016.[VY10] Daniele Varacca and Nobuko Yoshida. Typed event structures and the linear π -calculus. TheoreticalComputer Science , 411(19):1949–1973, 2010.[Wad14] Philip Wadler. Propositions as sessions.
Journal of Functional Programming , 24(2-3):384–418, 2014.[Win80] Glynn Winskel.
Events in Computation . PhD thesis, University of Edinburgh, 1980.[Win82] Glynn Winskel. Event structure semantics for CCS and related languages. In Mogens Nielsen andErik Meineche Schmidt, editors,
ICALP , volume 140 of
LNCS , pages 561–576. Springer, 1982. Win88] Glynn Winskel. An introduction to event structures. In Jaco W. de Bakker, Willem P. de Roever,and Grzegorz Rozenberg, editors,
REX: Linear Time, Branching Time and Partial Order in Logics andModels for Concurrency , volume 354 of
LNCS , pages 364–397. Springer, 1988.
This work is licensed under the Creative Commons Attribution License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.