Abstract machines for game semantics, revisited
Olle Fredriksson and Dan R. Ghica
University of Birmingham, UK
October 31, 2018
Abstract
We define new abstract machines for game semantics which correspond to networks of conventional computers, and can be used as an intermediate representation for compilation targeting distributed systems. This is achieved in two steps. First we introduce the HRAM, a Heap and Register Abstract Machine, an abstraction of a conventional computer, which can be structured into HRAM nets, an abstract point-to-point network model. HRAMs are multi-threaded and subsume communication by tokens (cf. IAM) or jumps. Game Abstract Machines (GAM) are HRAMs with additional structure at the interface level, but no special operational capabilities. We show that GAMs cannot be naively composed, but that composition must be mediated using appropriate HRAM combinators. HRAMs are flexible enough to allow the representation of game models for languages with state (non-innocent games) or concurrency (non-alternating games). We illustrate the potential of this technique by implementing a toy distributed compiler for ICA, a higher-order programming language with shared-state concurrency, thus significantly extending our previous distributed PCF compiler. We show that compilation is sound and memory-safe, i.e. no (distributed or local) garbage collection is necessary.
One of the most profound discoveries in theoretical computer science is the fact that logical and computational phenomena can be subsumed by relatively simple communication protocols. This understanding came independently from Girard's work on the Geometry of Interaction (GOI) [16] and Milner's work on process calculi [22], and had a profound influence on the subsequent development of game semantics (see [12] for a historical survey). Of the three, game semantics proved to be particularly effective at producing precise mathematical models for a large variety of programming languages, solving a long-standing open problem concerning higher-order sequential computation [1, 19].

∗ An extended abstract of this paper is due to appear in the Twenty-Eighth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2013), June 25-28, 2013, New Orleans, USA.

A remarkable feature of game semantics is that it is both denotational and operational: by denotational we mean that it is compositionally defined on the syntax, and by operational we mean that it can be effectively presented and can form a basis for compilation [13]. This feature was apparent from the earliest presentations of game semantics [18] and is not very surprising, although the operational aspects are less perspicuous than in interpretations based on process calculi or GOI, which quickly found applications in compiler [21] or interpreter [2] development and optimisation.

An important development, which provided essential inspiration for this work, was the introduction of the
Pointer Abstract Machine (PAM) and the Interaction Abstract Machine (IAM), which sought to fully restore the operational intuitions of game semantics [5] by relating them to two kinds of abstract machines, one based on term rewriting (PAM) and one based on networks of automata (IAM), profoundly inspired by GOI. A further optimisation of the IAM, the Jumping Abstract Machine (JAM), was introduced subsequently to avoid the overheads of the IAM [6].
Contribution
In this paper we develop the line of work on the PAM, IAM, and JAM, in order to define new abstract machines which correspond more closely to networks of conventional computers and can be used as an intermediate representation for compilation targeting distributed systems. This is achieved in two steps. First we introduce the HRAM, a Heap and Register Abstract Machine, an abstraction of a conventional computer, which can be structured into HRAM nets, an abstract point-to-point network model. HRAMs are multi-threaded and subsume communication by tokens (cf. IAM) or jumps. GAMs, Game Abstract Machines, are HRAMs with additional structure at the interface level, but no special operational capabilities. We show that GAMs cannot be naively composed, but that composition must be mediated using appropriate HRAM combinators. Starting from a formulation of game semantics in the nominal model [9] has two benefits. First, pointer manipulation requires no encoding or decoding, as in integer-based representations, but exploits the HRAM ability to create locally fresh names. Second, token size is constant, as only names are passed around; the computational history of a token is stored by the HRAM rather than passed around (cf. IAM). HRAMs are also flexible enough to allow the representation of game models for languages with state (non-innocent games) or concurrency (non-alternating games). We illustrate the potential of this technique by implementing a compiler targeting distributed systems for ICA, a higher-order programming language with shared-state concurrency [14], thus significantly extending our previous distributed PCF compiler [8]. We show that compilation is sound and memory-safe, i.e. no (distributed or local) garbage collection is necessary.

Other related and relevant work
The operational intuitions of GOI were originally confined to the sequential setting, but more recent work on Ludics showed how they can be applied to concurrency [7] through an abstract treatment not immediately applicable to our needs. Whereas our work takes the IAM/JAM as the starting point, developing abstract machines akin to the PAM [...]

Available from http://veritygos.org/gams.

Heap and register abstract machines

In this section we introduce a class of basic abstract machines for manipulating heap structures, which also have primitives for communication and control. They represent a natural intermediate stage for compilation to machine language, and will be used as such in Sec. 4. The machines can naturally be organised into communication networks which give an abstract representation of distributed systems. We find it formally convenient to work in a nominal model in order to avoid the difficulties caused by concrete encodings of game structures, especially justification pointers, as integers. We assume a certain familiarity from the reader with basic nominal concepts; the interested reader is referred to the literature ([10] is a starting point).
We fix a set of port names (A) and a set of pointer names (P) as disjoint sets of atoms. Let L ≜ {O, P} be the set of polarities of a port. To maintain an analogy with game semantics from the beginning, port names correspond to game-semantic moves and input/output polarities correspond to opponent/proponent. A port structure is a tuple (l, a) ∈ Port = L × A. An interface A ∈ P_fin(Port) is a set of port structures such that all port names are unique, i.e. for all p = (l, a), p′ = (l′, a′) ∈ A, if a = a′ then p = p′. Let the support of an interface be sup(A) ≜ {a | (l, a) ∈ A}, its set of port names.

The tensor of two interfaces is defined as A ⊗ B ≜ A ∪ B, where sup(A) ∩ sup(B) = ∅. The dual of an interface is defined as A* ≜ {p* | p ∈ A}, where (l, a)* ≜ (l*, a), O* ≜ P and P* ≜ O. An arrow interface is defined in terms of tensor and dual, A ⇒ B ≜ A* ⊗ B.

We introduce notation for the opponent ports of an interface, A^(O) ≜ {(O, a) ∈ A}. The player ports A^(P) of an interface are defined analogously. The set of all interfaces is denoted by I. We say that two interfaces have the same shape if they are equivariant, i.e. there is a permutation π : A → A such that {π · p | p ∈ A₁} = A₂, and we write π ⊢ A₁ =_A A₂, where π · (l, a) ≜ (l, π(a)) is the permutation action of π. We may write A₁ =_A A₂ if π is obvious or unimportant.

Let the set of data D consist of the null datum ∅, pointer names a ∈ P, and integers n ∈ Z. Let the set of instructions Instr be as below, where i, j, k ∈ N ∪ {∅} (which permits ignoring results and allocating "null" data).

• i ← new j, k allocates a new pointer in the heap, populates it with the values stored in registers j and k, and stores the pointer in register i.

• i, j ← get k reads the tuple pointed at by the name in register k and stores it in registers i and j.
• update i, j writes the value stored in register j to the second component of the value pointed to by the name in register i.

• free i releases the memory pointed to by the name in register i and resets the register.

• flip i, j exchanges the values of registers i and j.

• i ← set j sets register i to the value j.

Let code fragments C be given by C ::= Instr; C | ifzero N C C | spark a | end. The port names occurring in a code fragment are given by sup ∈ C → P_fin(A), defined in the obvious way (only the spark a instruction can contribute names). An ifzero i instruction branches according to the value stored in register i. A spark a will either jump to a or send a message to a, depending on whether a is a local port or not.

An engine is an interface together with a port map, E = (A, P) ∈ I × (sup(A^(O)) → C), such that for each code fragment c ∈ cod P and each port name a ∈ sup(c), (P, a) ∈ A, meaning that ports that are "sparked" must be output ports of the interface A. The set of all engines is E.

Engines have threads and a shared heap. All threads have a fixed number of registers r, which is a global constant. For the language ICA we will need four registers, but languages with more kinds of pointers in the game model, e.g. control pointers [20], may need and use more registers.

A thread is a tuple t = (c, d) ∈ T = C × D^r: a code fragment and an r-tuple of data register values.

An engine configuration is a tuple k = (t, h) ∈ K = P_fin(T) × (P ⇀ P × D): a set of threads and a heap that maps pointer names to pairs of pointer names and data items.

A pair consisting of an engine configuration and an engine will be written using the notation k : E ∈ K × E. Define the function initial ∈ E → K × E as initial(E) ≜ (∅, ∅) : E for an engine E.
This function pairs the engine up with an engine configuration consisting of no threads and an empty heap.

HRAMs communicate using messages, each consisting of a port name and a vector of data items of size r_m: m = (x, d) ∈ M = A × D^{r_m}. The constant r_m specifies the size of the messages in the network, and has to satisfy r_m ≤ r. For a set X ⊆ A, define M_X = X × D^{r_m}, the subset of M whose port names are limited to those of X.

We specify the operational semantics of an engine E = (A, P) as a transition relation −→_{E,χ} ⊆ K × ({•} ∪ (L × M)) × K. A transition is labelled either with • — a silent transition — or with a polarised message — an observable transition. Messages are constructed simply from the first r_m registers of a thread, meaning that on certain actions part of the register contents becomes observable in the transition relation.

To aid readability, we use the following shorthands:

• n −→_{E,χ} n′ means n −•→_{E,χ} n′ (silent transitions).

• n −(a, d)→_{E,χ} n′ means n −(P, (a, d))→_{E,χ} n′ (output transitions).

• n −(a, d)•→_{E,χ} n′ means n −(O, (a, d))→_{E,χ} n′ (input transitions).

We use the notation d for n-tuples of registers and d_i for the (zero-based) i-th component of d, and d_∅ ≜ ∅. For updating a register, we use d[i := d′] ≜ (d_0, ..., d_{i−1}, d′, d_{i+1}, ..., d_{n−1}) and d[∅ := d′] ≜ d.

To construct messages from the register contents of a thread, we use the functions msg ∈ D^r → D^{r_m}, which takes the first r_m components of its input, and regs ∈ D^{r_m} → D^r, which pads its input with ∅ at the end (i.e. regs(d) ≜ (d_0, ..., d_{r_m−1}, ∅, ..., ∅)).

The network connectivity is specified by the function χ, which will be described in more detail in the next sub-section. For a port name a, χ(a) can be read as "the port that a is connected to". The full operational rules for HRAMs are given in Fig. 1.
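The heap instructions can be made concrete with a small interpreter for a single HRAM thread. This is our own illustrative sketch, not the authors' implementation: registers are a fixed-size list, the heap is a dictionary from locally fresh pointer names to (pointer, data) pairs, and `NULL` stands for the null datum ∅.

```python
# Minimal sketch of one HRAM thread's heap instructions (illustrative
# names only). Registers are mutated in place; the heap maps fresh
# pointer names to (pointer, data) pairs.
import itertools

NULL = None
R = 4  # number of registers per thread (r)

class HRAM:
    def __init__(self):
        self.heap = {}
        self._fresh = itertools.count()  # source of locally fresh names

    def new(self, regs, i, j, k):
        p = next(self._fresh)            # p not in sup(h)
        self.heap[p] = (regs[j], regs[k])
        regs[i] = p

    def get(self, regs, i, j, k):
        regs[i], regs[j] = self.heap[regs[k]]

    def update(self, regs, i, j):
        fst, _ = self.heap[regs[i]]      # overwrite second component only
        self.heap[regs[i]] = (fst, regs[j])

    def free(self, regs, i):
        del self.heap[regs[i]]
        regs[i] = NULL                   # reset the register

    def flip(self, regs, i, j):
        regs[i], regs[j] = regs[j], regs[i]

    def set(self, regs, i, v):
        regs[i] = v
```

A thread would run a code fragment by applying these operations in sequence; `spark` and `end` belong to the net-level semantics and are omitted here.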
The interesting rule is the one for spark, because it depends on whether the port where the next computation is "sparked" is local or not. If the port is local then spark makes a jump, and if the port is non-local then it produces an output token and the current thread of execution is terminated, similarly to the IAM.

A well-formed
HRAM net S ∈ S is a set of engines, a function over port names specifying which ports are connected, and an external interface, S = (E, χ, A), where E is a set of engines, A ∈ I, and χ is a bijection between the net's output and input port names. Specifically, χ has to be in sup(A^(O) ⊗ A_E^(P)) → sup(A^(P) ⊗ A_E^(O)), where A_E = ⊗{A | (A, P) ∈ E}.

Fig. 2 shows a diagram of an HRAM net with two HRAMs (interfaces A, A′, two ports each), each with two running threads (t_i, t′_i) with local registers (d_i, d′_i) and shared heaps (h, h′). Two of the HRAM ports are connected and two are part of the global interface B.

The function χ gives the net connectivity. Its being in sup(A^(O) ⊗ A_E^(P)) → sup(A^(P) ⊗ A_E^(O)) means that it maps each input port name of the net's interface and each output port name of the net's engines to either an output port name of the net's interface or an input port name of one of its engines.

((i ← new j, k; C, d) ∪ t, h) −→_{E,χ} ((C, d[i := p]) ∪ t, h ∪ {p ↦ (d_j, d_k)})   if p ∉ sup(h)
((i, j ← get k; C, d) ∪ t, h ∪ {d_k ↦ (d, d′)}) −→_{E,χ} ((C, d[i := d][j := d′]) ∪ t, h ∪ {d_k ↦ (d, d′)})
((update i, j; C, d) ∪ t, h ∪ {d_i ↦ (d, d′)}) −→_{E,χ} ((C, d) ∪ t, h ∪ {d_i ↦ (d, d_j)})
((free i; C, d) ∪ t, h ∪ {d_i ↦ (d, d′)}) −→_{E,χ} ((C, d[i := ∅]) ∪ t, h)
((flip i, j; C, d) ∪ t, h) −→_{E,χ} ((C, d[i := d_j][j := d_i]) ∪ t, h)
((i ← set j; C, d) ∪ t, h) −→_{E,χ} ((C, d[i := j]) ∪ t, h)
((ifzero i c₁ c₂, d[i := 0]) ∪ t, h) −→_{E,χ} ((c₁, d[i := ∅]) ∪ t, h)
((ifzero i c₁ c₂, d[i := n + 1]) ∪ t, h) −→_{E,χ} ((c₂, d[i := ∅]) ∪ t, h)
((spark a, d) ∪ t, h) −(χ(a), msg(d))→_{E,χ} (t, h)   if (O, χ(a)) ∉ A
((spark a, d) ∪ t, h) −→_{E,χ} ((P(χ(a)), regs(msg(d))) ∪ t, h)   if (O, χ(a)) ∈ A
(t, h) −(a, d)•→_{E,χ} ((P(a), regs(d)) ∪ t, h)   if (O, a) ∈ A
((end, d) ∪ t, h) −→_{E,χ} (t, h)

Figure 1: Operational semantics of HRAMs

Figure 2: Example HRAM net (two HRAMs with interfaces A and A′, threads t_i and t′_i, registers d_i and d′_i, heaps h and h′; global interface B)
e −→_{E,χ} e′  implies  ({e : E} ∪ k, m̄) −→ ({e′ : E} ∪ k, m̄)
e −m→_{E,χ} e′  implies  ({e : E} ∪ k, m̄) −→ ({e′ : E} ∪ k, {m} ⊎ m̄)
e −m•→_{E,χ} e′  implies  ({e : E} ∪ k, {m} ⊎ m̄) −→ ({e′ : E} ∪ k, m̄)
(P, a) ∈ A  implies  (k, {(a, d)} ⊎ m̄) −(a, d)→ (k, m̄)
(O, a) ∈ A  implies  (k, m̄) −(a, d)•→ (k, {(χ(a), d)} ⊎ m̄)

Figure 3: Operational semantics of HRAM nets (k ranges over sets of engine configurations paired with engines, m̄ over multisets of pending messages)

Since χ is a bijection, each port name (and thus each port) is connected to exactly one other port name, so the abstract network model we are using is point-to-point.

For an engine e = (A, P), we define a singleton net with e as its sole engine as singleton(e) = ({e}, χ, A′), where A′ is an interface such that π ⊢ A =_A A′ and χ is given by:

χ(a) ≜ π(a) if a ∈ sup(A^(P))
χ(a) ≜ π⁻¹(a) if a ∈ sup(A′^(O))

A net configuration is a set of tuples of engine configurations and engines together with a multiset of pending messages: n = (k, m̄) ∈ N = P_fin(K × E) × Mset_fin(M). Define the function initial ∈ S → N as initial(E, χ, A) ≜ ({initial(E) | E ∈ E}, ∅), a net configuration with only initial engines and no pending messages.

The operational semantics of a net S = (E, χ, A) is specified as a transition relation −→ ⊆ N × ({•} ∪ (L × M_{sup(A)})) × N. The semantics is given in the style of the Chemical Abstract Machine (CHAM) [3], where HRAMs are "molecules" and the pending messages of the HRAM net are the "solution". HRAM inputs (outputs) are to (from) the set of pending messages. Silent transitions of any HRAM are silent transitions of the net. The rules are given in Fig. 3.

2.3 Semantics of HRAM nets

We define
List[A] for a set A to be the finite sequences of elements from A, and use s :: s′ for concatenation. A trace for a net (E, χ, A) is a finite sequence of messages with polarity: s ∈ List[L × M_{sup(A)}]. Write α ∈ L × M_{sup(A)} for single polarised messages. We use the same notational convention as before to identify inputs (−•).

For a trace s = α₁ :: α₂ :: ··· :: αₙ, define −s→ to be the following composition of relations on net configurations: −α₁→ −→* −α₂→ −→* ··· −αₙ→, where −→* is the reflexive transitive closure of −→, i.e. any number of silent steps are allowed in between those that are observable.

Write traces_A for the set List[L × M_{sup(A)}]. The denotation ⟦S⟧ ⊆ traces_A of a net S = (E, χ, A) is the set of traces of observable transitions reachable from the initial net configuration initial(S) using the transition relation:

⟦S⟧ ≜ {s ∈ traces_A | ∃n. initial(S) −s→ n}

The denotation of a net includes the empty trace and is prefix-closed by construction.

As with interfaces, we are not interested in the actual port names occurring in a trace, so we define equivariance for sets of traces. Let S₁ ⊆ traces_{A₁} and S₂ ⊆ traces_{A₂} for A₁, A₂ ∈ I.
Then S₁ =_A S₂ if and only if there is a permutation π ∈ A → A such that {π · s | s ∈ S₁} = S₂, where π · ε ≜ ε and π · (s :: (l, (a, d))) ≜ (π · s) :: (l, (π(a), d)).

Define the deletion operation s − A, which removes from a trace all elements (l, (x, d)) with x ∈ sup(A), and define the interleaving of sets of traces S₁ ⊆ traces_A and S₂ ⊆ traces_B as

S₁ ⊗ S₂ = {s | s ∈ traces_{A⊗B} ∧ s − B ∈ S₁ ∧ s − A ∈ S₂}.

Define the composition of the sets of traces S₁ ⊆ traces_{A⇒B} and S₂ ⊆ traces_{B′⇒C} with π ⊢ B =_A B′ as the usual synchronisation and hiding in trace semantics:

S₁ ; S₂ = {s − B | s ∈ traces_{A⊗B⊗C} ∧ s − C ∈ S₁ ∧ π · (s*_B − A) ∈ S₂}

(where s*_B is s with the polarity of the messages from B reversed).

Two nets, f = (E_f, χ_f, I_f) and g = (E_g, χ_g, I_g), are said to be structurally equivalent if they are graph-isomorphic, i.e. π · E_f = E_g, π ⊢ I_f =_A I_g and χ_g ∘ π = π ∘ χ_f.

Theorem 2.1. If S₁ and S₂ are structurally equivalent nets, then ⟦S₁⟧ =_A ⟦S₂⟧.

Proof. A straightforward induction on the trace length, in both directions.
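The deletion, polarity-reversal, and interleaving operators above can be sketched concretely. In the following illustrative fragment (our own encoding, not from the paper), a trace is a tuple of polarised messages (l, (a, d)) and an interface is represented by its support, a set of port names.

```python
# Sketch of the trace operations: traces are tuples of polarised
# messages (l, (a, d)); an interface is represented by sup(A), its
# set of port names. All names here are illustrative.

def delete(s, ports):
    """s - A: remove every message whose port name is in sup(A)."""
    return tuple(m for m in s if m[1][0] not in ports)

def flip_B(s, ports):
    """s *B: reverse the polarity of the messages from B."""
    flip = {'O': 'P', 'P': 'O'}
    return tuple(((flip[l] if a in ports else l), (a, d))
                 for (l, (a, d)) in s)

def in_interleaving(s, S1, S2, supA, supB):
    """Membership test for the interleaving S1 (x) S2 over A (x) B."""
    return delete(s, supB) in S1 and delete(s, supA) in S2
```

Trace composition would combine these: synchronise on the shared interface B via `flip_B` and renaming, then hide it with `delete`.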
In this sub-section we will show that HRAM nets form a symmetric compact-closed category. This establishes that our definitions are sensible and that HRAM nets are equal up to topological isomorphism. This result also shows that the structure of HRAM nets is very loose.

The category, called
HRAMnet, is defined as follows:

• Objects are interfaces A ∈ P_fin(Port), identified up to =_A-equivalence.

• A morphism f : A → B is a well-formed net of the form (E, χ, A ⇒ B), for some E and χ. We identify morphisms that have the same denotation, i.e. if ⟦f⟧ =_A ⟦g⟧ then f = g (in the category).

• The identity morphism for an object A is id_A ≜ (∅, χ, A ⇒ A′) for an A′ such that π ⊢ A =_A A′ and
χ(a) ≜ π(a) if a ∈ sup(A*^(O))
χ(a) ≜ π⁻¹(a) if a ∈ sup(A′^(O)).
Note that A ⇒ A′ = A* ∪ A′. This means that the identity is pure connectivity.

• Composition of two morphisms f = (E_f, χ_f, A ⇒ B) : A → B and g = (E_g, χ_g, B′ ⇒ C) : B′ → C, such that π ⊢ B =_A B′, is f ; g = (E_f ∪ E_g, χ_{f;g}, A ⇒ C) : A → C, where
χ_{f;g}(a) ≜ χ_f(a) if a ∈ sup(A*^(O) ⊗ I_f^(P)) ∧ χ_f(a) ∉ sup(B)
χ_{f;g}(a) ≜ χ_g(a) if a ∈ sup(C^(O) ⊗ I_g^(P)) ∧ χ_g(a) ∉ sup(B′)
χ_{f;g}(a) ≜ χ_g(π(χ_f(a))) if a ∈ sup(A*^(O) ⊗ I_f^(P)) ∧ χ_f(a) ∈ sup(B)
χ_{f;g}(a) ≜ χ_f(π⁻¹(χ_g(a))) if a ∈ sup(C^(O) ⊗ I_g^(P)) ∧ χ_g(a) ∈ sup(B′)
and I_f ≜ ⊗{A | (A, P) ∈ E_f}, I_g ≜ ⊗{A | (A, P) ∈ E_g}.

Note
We identify HRAMs with interfaces of the same shape in the category, which means that our objects and morphisms are in reality unions of equivariant sets. In defining the operations of our category we use representatives of these sets, and require that the representatives are chosen such that their sets of port names are disjoint (but same-shaped when the operation calls for it). The composition operation may appear to be partial because of this requirement, but we can always find equivariant representatives that fulfil it.

It is possible to find other representations of interfaces that do not rely on equivariance. For instance, an interface could simply be two natural numbers: the number of input and output ports. Another possibility would be to make the tensor the disjoint union operator. Both of these would, however, lead to a lot of bureaucracy relating to injection functions to make sure that port connections are routed correctly. Our formulation, while seemingly complex, leads to very little bureaucracy, and is easy to implement.

Proposition 2.2.
HRAMnet is a category.

Proof.

• Composition is well-defined, i.e. it preserves well-formedness. Let f = (E_f, χ_f, A ⇒ B) : A → B and g = (E_g, χ_g, B′ ⇒ C) : B′ → C be morphisms such that π ⊢ B =_A B′, and let their composition f ; g = (E_f ∪ E_g, χ, A ⇒ C) : A → C be as in the definition of composition. To prove that this is well-formed, we need to show that
χ ∈ sup((A ⇒ C)^(O) ⊗ I_fg^(P)) → sup((A ⇒ C)^(P) ⊗ I_fg^(O))
= sup(A*^(O) ⊗ C^(O) ⊗ I_f^(P) ⊗ I_g^(P)) → sup(A*^(P) ⊗ C^(P) ⊗ I_f^(O) ⊗ I_g^(O)),
where I_fg = ⊗{A | (A, P) ∈ E_f ∪ E_g}, and that it is a bijection.

We are given that
χ_f ∈ sup(A*^(O) ⊗ B^(O) ⊗ I_f^(P)) → sup(A*^(P) ⊗ B^(P) ⊗ I_f^(O))
χ_g ∈ sup(B′*^(O) ⊗ C^(O) ⊗ I_g^(P)) → sup(B′*^(P) ⊗ C^(P) ⊗ I_g^(O))
π ∈ sup(B) → sup(B′)
are bijections.

It is relatively easy to see that the domains specified in the clauses of the definition of χ are mutually disjoint sets and that their union is the domain that we are after. Since χ is defined in clauses, each of which is defined using either χ_f or χ_g and/or π (which are bijections with disjoint domains and codomains), it is enough to show that the sets of port names that χ_f is applied to in clauses 1 and 4 are disjoint, and similarly for χ_g in clauses 2 and 3:

– In clause 4, we have χ_g(a) ∈ sup(B′), and so π⁻¹(χ_g(a)) ∈ sup(B), which is disjoint from sup(A*^(O) ⊗ I_f^(P)) in clause 1.

– In clause 3, we have χ_f(a) ∈ sup(B), and so π(χ_f(a)) ∈ sup(B′), which is disjoint from sup(C^(O) ⊗ I_g^(P)) in clause 2.

• Composition is associative. Let f = (E_f, χ_f, A ⇒ B) : A → B, g = (E_g, χ_g, B′ ⇒ C) : B′ → C, and h = (E_h, χ_h, C′ ⇒ D) : C′ → D be nets such that π₁ ⊢ B =_A B′ and π₂ ⊢ C =_A C′.
Then we have
(f ; g) ; h = (E_f ∪ E_g ∪ E_h, χ_{(f;g);h}, A ⇒ D)
and
f ; (g ; h) = (E_f ∪ E_g ∪ E_h, χ_{f;(g;h)}, A ⇒ D)
according to the definition of composition. We need to show that χ_{(f;g);h} = χ_{f;(g;h)}, which implies that (f ; g) ; h = f ; (g ; h).

We do this by expanding the definitions, simplified using the following auxiliary function:
connect(c, A)(a) ≜ a if a ∉ sup(A)
connect(c, A)(a) ≜ c(a) if a ∈ sup(A)

We have f ; g = (E_f ∪ E_g, χ_{f;g}, A ⇒ C) and g ; h = (E_g ∪ E_h, χ_{g;h}, B′ ⇒ D), where
χ_{f;g}(a) ≜ connect(χ_g ∘ π₁, B)(χ_f(a)) if a ∈ sup(A*^(O) ⊗ I_f^(P))
χ_{f;g}(a) ≜ connect(χ_f ∘ π₁⁻¹, B′)(χ_g(a)) if a ∈ sup(C^(O) ⊗ I_g^(P))
χ_{g;h}(a) ≜ connect(χ_h ∘ π₂, C)(χ_g(a)) if a ∈ sup(B′*^(O) ⊗ I_g^(P))
χ_{g;h}(a) ≜ connect(χ_g ∘ π₂⁻¹, C′)(χ_h(a)) if a ∈ sup(D^(O) ⊗ I_h^(P))

Now χ_{(f;g);h} and χ_{f;(g;h)} are defined as follows:
χ_{(f;g);h}(a) ≜ connect(χ_h ∘ π₂, C)(χ_{f;g}(a)) if a ∈ sup(A*^(O) ⊗ I_{f;g}^(P))
χ_{(f;g);h}(a) ≜ connect(χ_{f;g} ∘ π₂⁻¹, C′)(χ_h(a)) if a ∈ sup(D^(O) ⊗ I_h^(P))
χ_{f;(g;h)}(a) ≜ connect(χ_{g;h} ∘ π₁, B)(χ_f(a)) if a ∈ sup(A*^(O) ⊗ I_f^(P))
χ_{f;(g;h)}(a) ≜ connect(χ_f ∘ π₁⁻¹, B′)(χ_{g;h}(a)) if a ∈ sup(D^(O) ⊗ I_{g;h}^(P))

One way to see that these two bijective functions are equal is to view them as case trees and consider every case. There are 13 such cases to consider, of which three are not possible. We show three cases:

1. If a ∈ sup(A*^(O) ⊗ I_f^(P)), χ_f(a) ∉ sup(B), and χ_f(a) ∉ sup(C), then
χ_{(f;g);h}(a) = connect(χ_h ∘ π₂, C)(χ_{f;g}(a)) = connect(χ_h ∘ π₂, C)(χ_f(a)) = χ_f(a)
and
χ_{f;(g;h)}(a) = connect(χ_{g;h} ∘ π₁, B)(χ_f(a)) = χ_f(a),
and thus they are equal.

2.
Consider the case where a ∈ sup(A*^(O) ⊗ I_f^(P)), χ_f(a) ∉ sup(B), and χ_f(a) ∈ sup(C). This case is not possible, since sup(C) is not a subset of the codomain of χ_f, which is sup(A*^(P) ⊗ B^(P) ⊗ I_f^(O)).

3. If a ∈ sup(D^(O) ⊗ I_h^(P)), χ_h(a) ∈ sup(C′), π₂⁻¹(χ_h(a)) ∈ sup(C^(O) ⊗ I_g^(P)), and χ_g(π₂⁻¹(χ_h(a))) ∈ sup(B′), then
χ_{(f;g);h}(a)
= connect(χ_{f;g} ∘ π₂⁻¹, C′)(χ_h(a))
= χ_{f;g}(π₂⁻¹(χ_h(a)))
= connect(χ_f ∘ π₁⁻¹, B′)(χ_g(π₂⁻¹(χ_h(a))))
= χ_f(π₁⁻¹(χ_g(π₂⁻¹(χ_h(a)))))
and
χ_{f;(g;h)}(a)
= connect(χ_f ∘ π₁⁻¹, B′)(χ_{g;h}(a))
= connect(χ_f ∘ π₁⁻¹, B′)(connect(χ_g ∘ π₂⁻¹, C′)(χ_h(a)))
= connect(χ_f ∘ π₁⁻¹, B′)(χ_g(π₂⁻¹(χ_h(a))))
= χ_f(π₁⁻¹(χ_g(π₂⁻¹(χ_h(a))))),
and thus they are equal.

The other cases are done similarly.

• id_A is well-formed. For any interface A,
id_A ≜ (∅, χ, A ⇒ A′)
for an A′ such that π ⊢ A =_A A′ and
χ(a) ≜ π(a) if a ∈ sup(A*^(O))
χ(a) ≜ π⁻¹(a) if a ∈ sup(A′^(O)),
according to the definition. We need to show that χ is a bijection:
χ ∈ sup((A ⇒ A′)^(O)) → sup((A ⇒ A′)^(P)) = sup(A*^(O) ∪ A′^(O)) → sup(A*^(P) ∪ A′^(P))
This is true since π is a bijection in sup(A) → sup(A′).

• id_A is an identity. For any morphism f : A → B we observe that id_A ; f is structurally equivalent to f, so by Theorem 2.1, ⟦id_A ; f⟧ =_A ⟦f⟧. The case for f ; id_B is similar.

We will now show that HRAMnet is a symmetric monoidal category:

• The tensor product of two objects
A and B, A ⊗ B, has already been defined. We define the tensor of two morphisms f = (E_f, χ_f, A ⇒ B), g = (E_g, χ_g, C ⇒ D) as f ⊗ g = (E_f ∪ E_g, χ_f ⊗ χ_g, A ⊗ C ⇒ B ⊗ D).

• The unit object is the empty interface, ∅.

• Since A ⊗ (B ⊗ C) = A ∪ B ∪ C = (A ⊗ B) ⊗ C, we define the associator α_{A,B,C} ≜ id_{A⊗B⊗C}, with the obvious inverse.

• Similarly, since ∅ ⊗ A = ∅ ∪ A = A = A ∪ ∅ = A ⊗ ∅, we define the left unitor λ_A ≜ id_A and the right unitor ρ_A ≜ id_A.

• Since A ⊗ B = A ∪ B = B ∪ A = B ⊗ A, we define the commutativity constraint γ_{A,B} ≜ id_{A⊗B}.

Proposition 2.3.
HRAMnet is a symmetric monoidal category.

Proof.

• The tensor product is well-defined, i.e. for two morphisms f, g, the net f ⊗ g is well-formed. This is easy to see since f and g are well-formed.

• The tensor product is a bifunctor:
– id_A ⊗ id_B = (∅, χ₁ ⊗ χ₂, A ⊗ B ⇒ A′ ⊗ B′) = id_{A⊗B} by the definition of id_{A⊗B}.
– (f ; g) ⊗ (h ; i) = (f ⊗ h) ; (g ⊗ i) by the definitions of composition and tensor on morphisms.

• The coherence conditions of the natural isomorphisms are trivial since the isomorphisms amount to identities.

Next we show that
HRAMnet is a compact-closed category:

• We have already defined the dual A* of an object A.

• Since ∅ ⇒ (A* ⊗ A′) = ∅* ∪ (A* ∪ A′) = A ⇒ A′, we can define the unit η_A ≜ id_A, and since (A ⊗ A′*) ⇒ ∅ = (A ∪ A′*)* ∪ ∅ = A* ∪ A′ = A ⇒ A′, we can define the counit ε_A ≜ id_A.

This leads us directly to the following result, which is what we set out to show:

Proposition 2.4.
HRAMnet is a symmetric compact-closed category.
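The four-clause connectivity function χ_{f;g} used in composition can be transcribed almost directly. The following is a sketch under our own encoding (not the paper's code): connection functions are dictionaries over port names, π is a renaming of sup(B) onto sup(B′), and the hidden B-side ports are simply dropped from the domain.

```python
# Sketch of chi_{f;g}: chi_f and chi_g map each input port name to the
# output port name it is connected to; pi renames sup(B) to sup(B').
# Clauses 1/3 handle ports entering f, clauses 2/4 ports entering g;
# the B-side ports themselves are hidden by composition.

def compose_chi(chi_f, chi_g, pi, sup_B, sup_Bp):
    pi_inv = {v: k for k, v in pi.items()}
    chi = {}
    for a, b in chi_f.items():
        if a in sup_B:                 # hidden by composition
            continue
        # clause 3 if the target lands in B, otherwise clause 1
        chi[a] = chi_g[pi[b]] if b in sup_B else b
    for a, b in chi_g.items():
        if a in sup_Bp:                # hidden by composition
            continue
        # clause 4 if the target lands in B', otherwise clause 2
        chi[a] = chi_f[pi_inv[b]] if b in sup_Bp else b
    return chi
```

For example, if f routes its input a to a B-port and g reads from the matching B′-port into c, the composite routes a directly to c, with the intermediate B names gone.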
The following two theorems can be proved by induction on the trace length; they connect the HRAMnet tensor and composition with trace interleaving and trace composition.
Theorem 2.5. If f : A → B and g : C → D are morphisms of HRAMnet, then ⟦f ⊗ g⟧ = ⟦f⟧ ⊗ ⟦g⟧.

Theorem 2.6. If f : A → B and g : B′ → C are morphisms of HRAMnet such that π ⊢ B =_A B′, then ⟦f ; g⟧ = ⟦f⟧ ; ⟦g⟧.

The following result explicates how communicating HRAMs can be combined, in a sound way, into a single machine where the intercommunication is done with jumping rather than message passing:

Theorem 2.7. If E₁ = (A₁, P₁) and E₂ = (A₂, P₂) are engines and S = ({E₁, E₂}, χ, A) is a net, then E = (A₁ ⊗ A₂, P₁ ∪ P₂) is an engine, S′ = ({E}, χ, A) is a net, and ⟦S⟧ ⊆ ⟦S′⟧.

Proof. We show that for any trace s, s ∈ ⟦S⟧ implies s ∈ ⟦S′⟧, by induction on the length of the trace.

Hypothesis. If s ∈ ⟦S⟧ and thus initial(S) −s→ ({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m) for some sets of threads t₁ and t₂, heaps h₁ and h₂, and a multiset of messages m, then initial(S′) −s→ ({(t₁ ∪ t₂ ∪ t_p, h₁ ∪ h₂) : E}, m_p), where t_p is a set of threads and m_p is a multiset of messages such that:

1. each t ∈ t_p is of the form t = (spark a, d) with χ(a) ∈ sup(A₁ ⊗ A₂), and

2. m = m_p ⊎ {(χ(a), msg(d)) | (spark a, d) ∈ t_p}.

Intuitively, the net where E₁ and E₂ have been combined into one engine will not have pending messages (in m) for communications between E₁ and E₂, but it can match the behaviour of such messages by threads that are just about to spark.

Base case.
Since any net can take zero steps, the case when s = ε is trivial.

Inductive step. If s = s′ :: α and the hypothesis holds for s′, then we have
initial(S) −s′→ ({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m) −→* −α→ ({(t′₁, h′₁) : E₁, (t′₂, h′₂) : E₂}, m′)
initial(S′) −s′→ ({(t₁ ∪ t₂ ∪ t_p, h₁ ∪ h₂) : E}, m_p)
with t_p and m_p as in the hypothesis. We first show that S′ can match the silent steps that S performs, by induction on the number of steps, using the same induction hypothesis as above:

Base case.
Trivial.
Inductive step.
Assume that we have
initial(S) −s′→ −→* ({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m)
initial(S′) −s′→ −→* ({(t₁ ∪ t₂ ∪ t_p, h₁ ∪ h₂) : E}, m_p)
such that the induction hypothesis holds. We need to show that any step
({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m) −→ ({(t′₁, h′₁) : E₁, (t′₂, h′₂) : E₂}, m′)
can be matched by (any number of) silent steps of the S′ configuration, such that the induction hypothesis still holds.

• A thread of S performs a silent step. This is trivial, since the threads of the engine configuration of S′ include all threads of the configurations of S, and its heap is the union of those of S.

• A thread of S does an internal engine send step. Since t₁ ∪ t₂ ∪ t_p includes all threads of the S configuration, and for the port name a in question χ(a) ∈ A₁ ∪ A₂ = A₁ ⊗ A₂, this can be matched by the configuration of S′ such that the induction hypothesis still holds.

• A thread of S does an external engine send. This means that there is a thread t ∈ t₁ ∪ t₂ of the form t = (spark a, d), which after the step will be removed, adding the message (χ(a), msg(d)) to the multiset of messages, i.e. m′ = m ⊎ {(χ(a), msg(d))}.
If χ(a) ∈ A₁ ∪ A₂, then the configuration of S′ can take zero steps, and thus include t in the set of threads ready to spark. The induction hypothesis still holds, since m′ = m ⊎ {(χ(a), msg(d))} = m_p ⊎ {(χ(a), msg(d)) | (spark a, d) ∈ t_p} ⊎ {(χ(a), msg(d))} = m_p ⊎ {(χ(a), msg(d)) | (spark a, d) ∈ t_p ∪ {t}}.
If χ(a) is instead a port of the external interface, then the configuration of S′ can match the step of S, removing the thread t from its set of threads as well. It is easy to see that the induction hypothesis holds also in this case.

• An engine of S receives a message. This means that m = {(a, d)} ⊎ m′ for a message such that the port (O, a) ∈ A₁ ∪ A₂ = A₁ ⊗ A₂. Then either (a, d) is in m_p or in {(χ(a), msg(d)) | (spark a, d) ∈ t_p}.
If it is the former, E can receive the message and start a thread equal to that started in the configuration of S. If it is the latter, there is a thread t = (spark χ⁻¹(a), d′) ∈ t_p with d = msg(d′) that can first take a send step, adding the message to the multiset of pending messages of the configuration of S′, after which it can be received as in S.

Next we show that the α step can be matched. Assume that we have
initial(S) −s′→ −→* ({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m)
initial(S′) −s′→ −→* ({(t₁ ∪ t₂ ∪ t_p, h₁ ∪ h₂) : E}, m_p)
such that the induction hypothesis holds. We need to show that for any α, a step
({(t₁, h₁) : E₁, (t₂, h₂) : E₂}, m) −α→ ({(t′₁, h′₁) : E₁, (t′₂, h′₂) : E₂}, m′)
can be matched by the S′ configuration, such that the induction hypothesis still holds. We have two cases:

• The configuration of S performs a send step. That is, m = {m₀} ⊎ m′ for an m₀ = (a, d) such that (P, a) ∈ A. Since sup(A) is disjoint from sup(A₁ ∪ A₂), the message is also in m_p, so the configuration of S′ can match the step.

• The configuration of S performs a receive step. This case is easy, as S and S′ have the same interface A.

We define a family of projection HRAM nets Π_{i, A₁⊗···⊗Aₙ} : A₁ ⊗ ··· ⊗ Aₙ → A_i by first constructing a family of "sinks" !_A : A → I ≜ singleton((A ⇒ I, P)), where I = ∅ and P(a) = end for each a in its domain, and then defining e.g. Π_{1, A⊗B} : A ⊗ B → A ≜ id_A ⊗ !_B.

The structure of a
HRAM net token is determined by the number of registers r and the message size r_m, which are globally fixed. To implement game-semantic machines we require four message components: a port name, two pointer names, and a data fragment, meaning that r_m = 3. We choose r = 4, to get an additional register for temporary thread values to work with. From this point on, messages in nets and traces will be restricted to this form.

The message structure is intended to capture the structure of a move when game semantics is expressed in the nominal model. The port name is the move, the first name is the "point" whereas the second name is the "butt" of a justification arrow, and the data is the value of the move. This direct and abstract encoding of the justification pointer as names is quite different from that used in PAM and in other GOI-based token machines. In PAM the pointer is represented by a sequence of integers encoding the hereditary justification of the move, which is a snapshot of the computational causal history of the move, just like in GOI-based machines. Such encodings have an immediate negative consequence, as tokens can become impractically large in complex computations, especially those involving recursion. Large tokens entail not only significant communication overheads but also the computational overhead of decoding their structure. A subtler negative consequence of such an encoding is that it makes supporting the semantic structures required to interpret state and concurrency needlessly complicated and inefficient. The nominal representation is simple and compact, and efficiently exploits local machine memory (the heap) in a way that previous abstract machines, of a "functional" nature, do not.

The price that we pay is a failure of compositionality, which we will illustrate shortly. The rest of the section will show how compositionality can be restored without substantially changing the HRAM framework.
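The four-component message shape just described can be pictured concretely. The following is a minimal Python sketch (ours, not part of the formal development; the names `Token` and `fresh_pointer` are our own) of a HRAM-net message with its nominally encoded justification pointer:

```python
import itertools
from dataclasses import dataclass

# Fresh-name supply for the nominal encoding of justification pointers.
_fresh = itertools.count()

def fresh_pointer() -> int:
    """Allocate a pointer name never used before (a nominal atom)."""
    return next(_fresh)

@dataclass(frozen=True)
class Token:
    """One HRAM-net message: a move in the nominal game model.

    port      -- the move (port name)
    justifier -- the "point" of the justification arrow (a known name)
    pointer   -- the "butt" of the arrow (fresh if locally created)
    data      -- the value carried by the move
    """
    port: str
    justifier: int
    pointer: int
    data: int

# A question sent by the context, introducing a fresh "butt" name:
p0 = fresh_pointer()
q = Token("q", p0, fresh_pointer(), 0)
```

Only the identity of the names matters, not their numeric values; a move never re-uses a "butt" name, which is what keeps tokens constant-size regardless of the depth of the computation.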
While in HRAM nets compositionality is "plug-and-play", as apparent from their compact-closed structure, Game Abstract Machine (GAM) composition must be mediated by a family of operators which are themselves HRAMs.

In this simple motivating example it is assumed that the reader is familiar with game semantics, and several of the notions to be introduced formally in the next sub-sections are anticipated. We trust that this will not be confusing.

Let S be a HRAM representing the game-semantic model for the successor operation S : int → int. The HRAM net in Fig. 4 represents a (failed) attempt to construct an interpretation for the term x : int ⊢ S(S(x)) : int in a context C[−int] : int. This is the standard way of composing GOI-like machines. The labels along the edges of the HRAM net trace a token (a, p₁, p₂, d) sent by the context C[−] in order to evaluate the term. We elide a and d, which are irrelevant, to keep the diagram uncluttered. The token is received by the outer S and propagated to the other S HRAM, this time with pointers (p₂, p₃).

[Figure 4: Non-locality of names in HRAM composition]

This trace of events (p₁, p₂)::(p₂, p₃) corresponds to the existence of a justification pointer from the second action to the first in the game model. The essential correctness invariant for a well-formed trace representing a game-semantic play is that each token consists of a known name and a fresh name (if locally created, or unknown if externally created). However, the second S machine will respond with (p₃, p₄) to (p₂, p₃), leading to a situation where C[−] receives a token formed from two unknown names.

In game semantics, the composition of (p₁, p₂)::(p₂, p₃) with (p₂, p₃)::(p₃, p₄) should lead to (p₁, p₂)::(p₂, p₄), as justification pointers are "extended" so that they never point into a move hidden through composition. This is precisely what the composition operator, a specialised HRAM, will be designed to achieve.
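The pointer "extension" just described can be sketched operationally. Assuming traces are lists of `(label, (port, justifier, pointer, data))` tuples, the following Python sketch (ours; the formal reindexing deletion operator s ⇂ X is defined in Sec. 3.4) hides a set of ports while threading justifiers past the deleted messages:

```python
def hide(trace, hidden_ports):
    """Delete the messages on hidden ports, extending justification
    pointers through the deleted messages via the substitution rho."""
    rho, kept = {}, []
    for (l, (a, p, p2, d)) in trace:
        if a in hidden_ports:
            # anything later pointing at p2 really points at whatever
            # p itself (possibly already reindexed) points at
            rho[p2] = rho.get(p, p)
        else:
            kept.append((l, (a, rho.get(p, p), p2, d)))
    return kept, rho
```

On a three-move chain whose middle move is hidden, the last move's justifier is extended to the first move's pointer, exactly the behaviour required of composition above.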
Definition 3.1.
We define a game interface (cf. arena) as a tuple 𝒜 = (A, qst_𝒜, ini_𝒜, ⊢_𝒜) where

• A ∈ I is an interface. For game interfaces 𝒜, ℬ, 𝒞 we will write A, B, C and so on for their underlying interfaces.

• The set of ports is partitioned into a subset of question port names qst_𝒜 and one of answer port names ans_𝒜, with qst_𝒜 ⊎ ans_𝒜 = sup(A).

• The set of initial port names ini_𝒜 is a subset of the O-labelled question ports.

• The enabling relation ⊢_𝒜 relates question port names to non-initial port names, such that if a ⊢_𝒜 a′ for port names a ∈ qst_𝒜 with (l, a) ∈ A and a′ ∈ sup(A) \ ini_𝒜 with (l′, a′) ∈ A, then l ≠ l′.

For notational consistency, write opp_𝒜 ≜ sup(A^(O)) and prop_𝒜 ≜ sup(A^(P)). Call the set of all game interfaces I_G. Game interfaces are equivariant: π ⊢ 𝒜 =α ℬ if and only if π ⊢ A =α B, {π(a) | a ∈ qst_𝒜} = qst_ℬ, {π(a) | a ∈ ini_𝒜} = ini_ℬ and {(π(a), π(a′)) | a ⊢_𝒜 a′} = ⊢_ℬ.

Definition 3.2.
For game interfaces (with disjoint sets of port names) 𝒜 and ℬ, we define:

𝒜 ⊗ ℬ ≜ (A ⊗ B, qst_𝒜 ∪ qst_ℬ, ini_𝒜 ∪ ini_ℬ, ⊢_𝒜 ∪ ⊢_ℬ)
𝒜 ⇒ ℬ ≜ (A ⇒ B, qst_𝒜 ∪ qst_ℬ, ini_ℬ, ⊢_𝒜 ∪ ⊢_ℬ ∪ (ini_ℬ × ini_𝒜)).

A GAM net is a tuple G = (S, 𝒜) ∈ S × I_G consisting of a net and a game interface such that S = (E, χ, A), i.e. the interface of the game net is the same as that of the game interface. The denotational semantics of a GAM net G = (S, 𝒜) is just that of the underlying HRAM net: ⟦G⟧ ≜ ⟦S⟧.

To be able to use game semantics as the specification for game nets we define the usual legality conditions on traces, following [9].
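The two constructions above can be made concrete in a few lines. The following Python sketch is ours (a `GameInterface` record and the two combinators; polarity relabelling on the A side of ⇒ is elided), assuming ports are plain strings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GameInterface:
    """A game interface: port names split into questions and answers,
    with initial ports and an enabling relation (cf. Definition 3.1)."""
    ports: frozenset      # sup(A)
    questions: frozenset  # qst; the answers are ports - questions
    initial: frozenset    # ini, a subset of the questions
    enabling: frozenset   # pairs (a, a') with a "enables" a'

def tensor(a: GameInterface, b: GameInterface) -> GameInterface:
    """A ⊗ B: disjoint union; both sides keep their initial moves."""
    assert not (a.ports & b.ports), "port names must be disjoint"
    return GameInterface(a.ports | b.ports, a.questions | b.questions,
                         a.initial | b.initial, a.enabling | b.enabling)

def arrow(a: GameInterface, b: GameInterface) -> GameInterface:
    """A => B: only B's initial moves stay initial, and they enable
    the (now non-initial) former initial moves of A."""
    assert not (a.ports & b.ports), "port names must be disjoint"
    extra = frozenset((qb, qa) for qb in b.initial for qa in a.initial)
    return GameInterface(a.ports | b.ports, a.questions | b.questions,
                         b.initial, a.enabling | b.enabling | extra)
```

For instance, with `com` the two-move arena used later in the section, `arrow(com, com')` keeps only the right-hand question initial and makes it enable the left-hand one.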
Definition 3.3.
The coabstracted and free pointers cp and fp ∈ traces → P(P) are:

cp(ǫ) ≜ ∅
cp(s ::(l, (a, p, p′, d))) ≜ cp(s) ∪ {p′}
fp(ǫ) ≜ ∅
fp(s ::(l, (a, p, p′, d))) ≜ fp(s) ∪ ({p} \ cp(s))

The pointers of a trace are ptrs(s) = cp(s) ∪ fp(s).

Definition 3.4.
Define enabled_A ∈ traces_A → P(sup(A) × P) inductively as follows:

enabled_A(ǫ) ≜ ∅
enabled_A(s ::(l, (a, p, p′, d))) ≜ enabled_A(s) ∪ {(a′, p′) | a ⊢_A a′}

Definition 3.5.
We define the following relations over traces:

• Write s′ ≤ s if and only if there is a trace s₁ such that s′ :: s₁ = s, i.e. s′ is a prefix of s.

• Write s′ ⊆ s if and only if there are traces s₁, s₂ such that s₁ :: s′ :: s₂ = s, i.e. s′ is a segment of s.

Definition 3.6.
For an arena A and a trace s ∈ traces_A, we define the following legality conditions:

• s has unique pointers when s′ ::(l, (a, p, p′, d)) ≤ s implies p′ ∉ ptrs(s′).

• s is correctly labelled when (l, (a, p, p′, d)) ⊆ s implies a ∈ sup(A^(l)).

• s is justified when s′ ::(l, (a, p, p′, d)) ≤ s and a ∉ ini_A implies (a, p) ∈ enabled_A(s′).

• s is well-opened when s′ ::(l, (a, p, p′, d)) ≤ s with a ∈ ini_A implies s′ = ǫ.

• s is strictly scoped when (l, (a, p, p′, d)) :: s′ ⊆ s with a ∈ ans_A implies p ∉ fp(s′).

• s is strictly nested when (l₁, (a₁, p, p′, d₁)) :: s′ ::(l₂, (a₂, p′, p″, d₂)) :: s″ ::(l₃, (a₃, p′, p‴, d₃)) ⊆ s implies (l₄, (a₄, p″, −, d₄)) ⊆ s″, for port names a₁, a₂ ∈ qst_A and a₃, a₄ ∈ ans_A.

• s is alternating when (l₁, m₁) ::(l₂, m₂) ⊆ s implies l₁ ≠ l₂.

Definition 3.7.
We say that a question message α = (l, (a, p, p′, d)) (a ∈ qst_A) is pending in a trace s = s₁ :: α :: s₂ if and only if there is no answer α′ = (l′, (a′, p′, p″, d′)) ⊆ s₂ (a′ ∈ ans_A), i.e. the question has not been answered.

Write P_A for the subset of traces_A consisting of the traces that have unique pointers and are correctly labelled, justified, strictly scoped and strictly nested. For a set of traces P, write P^alt for the subset consisting of only the alternating traces, and P^st (for single-threaded) for the subset consisting of only the well-opened traces.

Definition 3.8. If s ∈ traces and X ⊆ P, define the hereditarily justified trace s ↾ X inductively, where (s′, X′) = s₀ ↾ X in the last two clauses:

ǫ ↾ X ≜ (ǫ, X)
s₀ ::(l, (a, p, p′, d)) ↾ X ≜ (s′ ::(l, (a, p, p′, d)), X′ ∪ {p′})  if p ∈ X′
s₀ ::(l, (a, p, p′, d)) ↾ X ≜ (s′, X′)  if p ∉ X′

We write s ↾ X for s′ when s ↾ X = (s′, X′), when it is convenient.

The quintessential game-semantic behaviour is that of the copy-cat strategy, as it appears in various guises in the representation of all structural morphisms of any category of strategies. A copy-cat not only replicates the behaviour of its Opponent in terms of moves, but also in terms of justification structures. Because of this, the copy-cat strategy either needs to be history-sensitive (stateful) or the justification information needs to be carried along with the token. We take the former approach, in contrast to the IAM and other GOI-inspired machines. Consider the identity (or copycat) strategy on com ⇒ com, where com is a two-move arena (one question, one answer). A typical play may look as in Fig.
5. The full lines represent justification pointers, and the trace (play) is represented nominally as

(r₁, p₁, p₁)::(r₂, p₁, p₂)::(r₃, p₂, p₃)::(r₄, p₁, p₄)::(d₄, p₄) ···

To preserve the justification structure, a copycat engine only needs to store "copycat links", which are shown as dashed lines in the diagram between question moves. In this instance, for an input on r₁, a heap value mapping a freshly created p₂ (the pointer to r₂) to p₁ (the pointer from r₁) is added. The reason for mapping p₂ to p₁ becomes clear when the engine later gets an input on r₃ with pointers p₂ and p₃. It can then replicate the move to r₄, but using p₁ as a justifier. By following the p₂ pointer in the heap it gets p₁, so it can produce (r₄, p₁, p₄), where p₄ is a fresh heap value mapping to p₃. When receiving an answer, i.e. a d move, the copycat link can be dereferenced and then discarded from the heap.

[Figure 5: A typical play for copycat, over (com ⇒ com) → (com ⇒ com)]

The following HRAM macro-instructions are useful in defining copy-cat machines to, respectively, handle the pointers in an initial question, a non-initial question and an answer: cci ∆ = flip ,
1; 1 ← new , ccq ∆ = 1 ← new ,
3; 0 , ← get cca ∆ = flip ,
1; 0 , ← get free A and A ′ such that π ⊢ A = A A ′ , we define a generalisedcopycat engine as CC C,π, A = ( A ⇒ A ′ , P ), where: P ∆ = { q C ; spark q | q ∈ ini A ′ ∧ q = π − ( q ) }∪ { q ccq ; spark q | q ∈ ( opp A ′ ∩ qst A ′ ) \ ini A ′ ∧ q = π − ( q ) }∪ { a cca ; spark a | a ∈ opp A ′ ∩ ans A ′ ∧ a = π − ( a ) }∪ { q ccq ; spark q | q ∈ opp A ∩ qst A ∧ q = π ( q ) }∪ { a cca ; spark a | a ∈ opp A ∩ ans A ∧ a = π ( a ) } This copycat engine is parametrised with an initial instruction C , which is runwhen receiving an initial question. The engine for an ordinary copycat, i.e. theidentity of games, is CC cci ,π, A . By slight abuse of notation, write CC A for thesingleton copycat game net ( singleton ( CC cci ,π, A ) , A ⇒ π · A ).Following [9], we define a partial order ≤ over polarities, L , as O ≤ O , O ≤ P , P ≤ P and a preorder over traces from P A to be the least reflexive andtransitive such that if l ≤ l then s ::( l , ( a , p , p ′ , d ))::( l , ( a , p , p ′ , d )):: s s ::( l , ( a , p , p ′ , d ))::( l , ( a , p , p ′ , d )):: s , p ′ = p . A set of traces S ⊆ P A is saturated if and only if, for s, s ′ ∈ P A , s ′ s and s ∈ S implies s ′ ∈ S . If S ⊆ P A is a set of traces, let sat ( S ) be thesmallest saturated set of traces that contains S .The usual definition of the copycat strategy (in the alternating and single-threaded setting) as a set of traces is cc st,alt A , A ′ ∆ = { s ∈ P st,alt A ⇒ A ′ | ∀ s ′ ≤ even s. s ′∗ ↾ A = AP s ′ ↾ A ′ } Definition 3.9.
A set of traces S₁ is P-closed with respect to a set of traces S₂ if and only if s′ ∈ S₁ ∩ S₂ and s = s′ ::(P, (a, p, p′, d)) ∈ S₁ implies s ∈ S₂. The intuition of P-closure is that if the trace s′ is "legal" according to S₂, then any outputs that can occur after s′ in S₁ are also legal.

Definition 3.10.
We say that a GAM net f implements a set of traces S if and only if S ⊆ ⟦f⟧ and ⟦f⟧ is P-closed with respect to S.

This is the form of the statements of correctness for game nets that we want; it certifies that the net f can accommodate all traces in S and, furthermore, that it only produces legal outputs when given valid inputs. The main result of this section establishes the correctness of the GAM for copycat.

Theorem 3.11. CC_{π,𝒜} implements cc_{𝒜, π·𝒜}.

This is a direct corollary of Lemmas 3.13, 3.16 and 3.17 and Theorems 3.18 and 3.22, given below.
Lemma 3.12. If n = ( e : E, m ) and n ′ = ( e ′ : E, m ′ ) are net configurationsof a net f = ( E, χ, A ) , and n x ) −−→ n ′ ( ( x ) ∈ {•}∪ ( L ×M sup( A ) ) then n x ) −−→ n ′ where n = ( e : E, m ⊎ { m } ) and n ′ = ( e ′ : E, m ′ ⊎ { m } ) .Proof. By cases on ( x ): • If ( x ) = • , then e : E = { e : E } ∪ e : E , e ( y ) −−→ E,χ e ′ for some ( y ), e ′ : E = { e ′ : E } ∪ e ′ : E . We have three cases for ( y ): – If ( y ) = • , then e −−→ E,χ e ′ and m ′ = m . Then we also have n = ( { e : E } ∪ e : E , m ⊎ { m } ) −→ ( { e ′ : E } ∪ e ′ : E , m ⊎ { m } ) = n ′ . – If ( y ) = ( P , m ′ ), then e m ′ −−→ E,χ e ′ and m ′ = { m ′ } ∪ m . Then we alsohave n = ( { e : E }∪ e : E , m ⊎{ m } ) −→ ( { e ′ : E }∪ e ′ : E , { m ′ }⊎ m ⊎ { m } ) = n ′ . – If ( y ) = ( O , m ′ ), then e m ′ −−→ E,χ e ′ and m = { m ′ } ⊎ m ′ . Then wealso have n = ( { e : E } ∪ e : E , { m ′ } ⊎ m ′ ⊎ { m } ) −→ ( { e ′ : E } ∪ e ′ : E , m ′ ⊎ { m } ) = n ′ . • If ( x ) = ( P , m ′ ), then e ′ : E = e : E and m = { m ′ } ⊎ m ′ . Then we alsohave n = ( e : E, { m ′ } ⊎ m ′ ⊎ { m } ) m ′ −−→ ( e : E, m ′ ⊎ { m } ) = n ′ .21 If ( x ) = ( O , m ′ ), where m ′ = ( a, p, p ′ , d ) then e ′ : E = e : E and m ′ = { ( χ ( a ) , p, p ′ , d ) } ⊎ m . Then we also have n = ( e : E, m ⊎ { m } ) m ′ −−→ ( e : E, { ( χ ( a ) , p, p ′ , d ) } ⊎ m ⊎ { m } ) = n ′ . Lemma 3.13. If f is a net and s a trace, then1. s = s ::( l, m )::( O , m ):: s ∈ J f K with witness initial( f ) s −→ n implies s ′ = s ::( O , m )::( l, m ):: s ∈ J f K with initial( f ) s ′ −→ n and2. s = s ::( P , m )::( l, m ):: s ∈ J f K with witness initial( f ) s −→ n implies s ′ = s ::( l, m )::( P , m ):: s ∈ J f K with initial( f ) s ′ −→ n . A special case of this theorem is that if G = ( f, A ) and, for a set of traces S ⊆ P A , S ⊆ J G K holds, then sat ( S ) ⊆ J G K . Proof. 
s = s ::( l, m )::( O , m ):: s ∈ J f K means that initial ( f ) s −→ ( x ) −−→ ∗ n l,m ) −−−−→ n y ) −−→ ∗ ( O ,m ) −−−−→ n z ) −−→ ∗ s −→ n for net configurations n , n , n , n . For clarity, we take ( x ) , ( y ) , ( z ) to be“names” for the silent transitions. We show that there exist n ′ and ( y ′ )such that initial ( f ) s −→ ( x ) −−→ ∗ n O ,m ) −−−−→ ( l,m ) −−−−→ n ′ y ′ ) −−→ ∗ n z ) −−→ s −→ n by induction on the length of ( y ) −−→ ∗ : • Base case. If ( y ) −−→ ∗ is the identity relation, then assume n l,m ) −−−−→ n O ,m ) −−−−→ n Let n = ( e : E, m ), n = ( e : E, m ), m = ( a, p, p ′ , d ), and m ′ = ( χ ( a ) , p, p ′ , d ). Then n = ( e : E, { m ′ }⊎ m ) by the definitionof −→ . Since ( O , a ) ∈ I , n O ,m ) −−−−→ ( e : E, { m ′ } ⊎ m ). Also, since n l,m ) −−−−→ n we have ( e : E, { m ′ } ⊎ m ) ( l,m ) −−−−→ n by Lemma 3.12.Composing the relations, we get n O ,m ) −−−−→ ( l,m ) −−−−→ n which completes the base case. • Inductive step. If ( y ) −−→ ∗ = ( y ) −−→ ∗ • −→ such that for any n ′ n l,m ) −−−−→ n y ) −−→ ∗ ( O ,m ) −−−−→ n ′ implies that there exist n ′ and ( y ′ ) with n O ,m ) −−−−→ ( l,m ) −−−−→ n ′ y ′ ) −−→ ∗ n ′ n l,m ) −−−−→ n y ) −−→ ∗ n y • −→ n y ( O ,m ) −−−−→ n Let n y = ( e y : E, m y ), n y = ( e y : E, m y ), m = ( a, p, p ′ , d ), and m ′ = ( χ ( a ) , p, p ′ , d ). Then n = ( e y : E, { m ′ }⊎ m y ) by the definitionof −→ . Since ( O , a ) ∈ I , n y ( O ,m ) −−−−→ ( e y : E, { m ′ } ⊎ m y ). Also,since n y • −→ n y we have ( e y : E, { m ′ } ⊎ m y ) • −→ n by Lemma 3.12.Composing the relations, we get n l,m ) −−−−→ n y ) −−→ ∗ n y ( O ,m ) −−−−→ ( e y : E, { m ′ } ⊎ m y ) • −→ n Applying the hypothesis, we finally get n O ,m ) −−−−→ ( l,m ) −−−−→ n ′ y ′ ) −−→ ∗ • −→ n which completes the first part of the proof.2. 
s = s ::( P , m )::( l, m ):: s ∈ J f K means that initial ( f ) s −→ ( x ) −−→ ∗ n P ,m ) −−−−→ n y ) −−→ ∗ ( l,m ) −−−−→ n z ) −−→ ∗ s −→ n for net configurations n , n , n , n and ( x ) , ( y ) , ( z ) names for the silenttransitions. We show that there exist ( y ′ ) and n ′ such that initial ( f ) s −→ ( x ) −−→ ∗ n y ′ ) −−→ ∗ n ′ l,m ) −−−−→ ( P ,m ) −−−−→ n z ) −−→ ∗ s −→ n by induction on the length of ( y ) −−→ ∗ : • Base case. If ( y ) −−→ ∗ is the identity relation, then assume n P ,m ) −−−−→ n l,m ) −−−−→ n Let n = ( e : E, m ), n = ( e : E, m ), m = ( a, p, p ′ , d ) Then n = ( e : E, { m } ⊎ m ) by the definition of −→ . Since ( P , a ) ∈ I ,( e : E, { m } ⊎ m ) ( P ,m ) −−−−→ n . Also, since n l,m ) −−−−→ n we have n l,m ) −−−−→ ( e : E, { m } ⊎ m ) by Lemma 3.12. Composing the rela-tions, we get n l,m ) −−−−→ ( P ,m ) −−−−→ n which completes the base case. • Inductive step. If ( y ) −−→ ∗ = • −→ ( y ) −−→ ∗ such that for any n ′ n ′ P ,m ) −−−−→ ( y ) −−→ ∗ n l,m ) −−−−→ n implies that there exist n ′ and ( y ′ ) with n ′ y ′ ) −−→ ∗ n ′ l,m ) −−−−→ ( P ,m ) −−−−→ n n P ,m ) −−−−→ n m • −→ n y ( y ) −−→ ∗ n l,m ) −−−−→ n Let n m = ( e m : E, m m ), n y = ( e y : E, m y ), and m = ( a, p, p ′ , d ).Then n = ( e m : E, { m }⊎ m m ) by the definition of −→ . Since ( P , a ) ∈ I , ( e y : E, { m } ⊎ m y ) ( P ,m ) −−−−→ n y . Also, since n m • −→ n y we have n • −→ ( e y : E, { m } ⊎ m y ) by Lemma 3.12. Composing the relations,we get n • −→ ( e y : E, { m } ⊎ m y ) ( P ,m ) −−−−→ n y ( y ) −−→ ∗ n l,m ) −−−−→ n Applying the hypothesis, we finally get n • −→ ( y ′ ) −−→ ∗ n ′ l,m ) −−−−→ ( P ,m ) −−−−→ n which completes the proof. Lemma 3.14. If s, s ′ ∈ P A and s ′ s , then1. enabled( s ) = enabled( s ′ ) ,2. cp( s ) = cp( s ′ ) , and3. fp( s ) = fp( s ′ ) .Proof. Induction on . The base case is trivial. Consider the case where s = s :: α :: α :: s and s ′ = s :: α :: α :: s . Let α = ( l, ( a , p , p ′ , d )) and α =( l, ( a , p , p ′ , d )).1. 
Induction on the length of s . In the base case, we have (by associativityof ∪ ): enabled ( s :: α :: α ) = enabled ( s ) ∪ { ( a, p ′ ) | a ⊢ A a } ∪ { ( a, p ′ ) | a ⊢ A a } = enabled ( s ) ∪ { ( a, p ′ ) | a ⊢ A a } ∪ { ( a, p ′ ) | a ⊢ A a } .2. Induction on the length of s as in 1.3. Induction on the length of s . In the base case, we have (since by the def.of , p = p ′ and p = p ′ ): fp ( s :: α :: α ) = fp ( s :: α ) ∪ ( { p } \ cp ( s :: α )) = fp ( s ) ∪ ( { p } \ cp ( s )) ∪ ( { p } \ ( cp ( s ) ∪ { p ′ } )) = fp ( s ) ∪ ( { p } \ ( cp ( s ) ∪ { p ′ } )) ∪ ( { p } \ cp ( s )) = fp ( s ) ∪ ( { p } \ cp ( s )) ∪ ( { p } \ ( cp ( s ) ∪ { p ′ } )) = fp ( s :: α ) ∪ ( { p } \ cp ( s :: α )) = fp ( s :: α :: α )24 emma 3.15. Let S ⊆ P A be a saturated set of traces. If s, s ′ ∈ S are tracessuch that s ′ s and s :: α ∈ S , then s ′ :: α ∈ S .Proof. Induction on . The base case is trivial. We show the case of a singleswapping. If s ′ s , we have s = s :: α :: α :: s and s ′ = s :: α :: α :: s for some s , s , α , α . Obviously, s ′ :: α s :: α .We have to show that if s :: α ∈ P A , then s ′ :: α ∈ P A . We have to show that s ′ :: α fulfils the legality conditions imposed by P A : • It is easy to see that s ′ :: α has unique pointers and is correctly labelled. • s ′ :: α is justified since enabled ( s ) = enabled ( s ′ ) by Lemma 3.14. • To see that s ′ :: α strictly scoped, consider the (“worst”) case when( l, ( a, p, p ′ , d )):: s :: α ⊆ s ′ :: α and a ∈ ans A (i.e. we pick the segment that goes right up to the end of the trace). Weconsider the different possibilities of the position of this answer message: – If ( l, ( a, p, p ′ , d )) ⊆ s , then let s ′ = ( l, ( a, p, p ′ , d )):: s ′ :: α :: α :: s :: α ⊆ s ′ :: α and s = ( l, ( a, p, p ′ , d )):: s ′ :: α :: α :: s :: α . We also know that p / ∈ fp ( s ) as s :: α ∈ P A . Now, since s ′ s , we have fp ( s ) = fp ( s ′ )by Lemma 3.14 and thus also p / ∈ fp ( s ′ ). – If ( l, ( a, p, p ′ , d )) = α . 
We know that p / ∈ fp ( s :: α ) by s :: α ∈ P A .Since s ′ ∈ P A we have p / ∈ fp ( α ) and can so conclude that p / ∈ fp ( α :: s :: α ). – If ( l, ( a, p, p ′ , d )) = α or ( l, ( a, p, p ′ , d )) ⊆ s , p / ∈ fp ( s :: α ) followsimmediately from s ∈ P A . – If ( l, ( a, p, p ′ , d )) = α , p / ∈ fp ( ǫ ) = ∅ is trivially true. • To see that s ′ :: α is strictly nested, assume( l , ( a , p, p ′ , d )):: s ::( l , ( a , p ′ , p ′′ , d )):: s ::( l , ( a , p ′ , p ′′′ , d )) ⊆ s ′ :: α for port names a , a ∈ qst A and a ∈ ans A . We have to show that thisimplies ( l , ( a , p ′′ , − , d )) ⊆ s , for a port name a ∈ ans A . We proceedby considering the possible positions of the last message in the segment: – If ( l , ( a , p ′ , p ′′′ , d )) ⊆ s ′ , then the proof is immediate, by s ′ ∈ P A being strictly nested. – If ( l , ( a , p ′ , p ′′′ , d )) = α we use the fact that s :: α ∈ P A is strictlynested. We assume that the implication (using the same names) asabove holds but instead for s :: α , and show that any swappings thatcan have occurred in s ′ that reorder the a , a , a moves would render s ′ illegal: ∗ If a was moved before a , then s ′ would not be justified. ∗ If a was moved before a , then s ′ would not be justified.As the order is preserved, this shows that the swappings must bedone in a way such that the implication holds for s ′ :: α .25 emma 3.16. For any game net f = ( S, A ) and trace s ∈ P A , s ∈ J f K if andonly if ∀ p ∈ fp( s ) .s ↾ { p } ∈ J f K . Lemma 3.17. cc st,alt A ,π · A ⊆ J CC π, A K .Proof. For convenience, let ( f, A ⇒ A ′ ) = CC π, A , A ′ , S = cc st,alt A , A ′ and S = J f K .We show that s ∈ S implies s ∈ S , by induction on the length of s : • Hypothesis. If s has even length, then initial ( f ) s −→ ( { ( ∅ , h ) : E } , ∅ ) and h is exactly (nothing more than) a copycat heap for s over A ⇒ A ′ . Inother words, there are no threads running and no pending messages andthe heap is precisely specified. • Base case. Trivial. 
• Inductive step. At any point in the execution of the configuration of f , an O -labelled message can be received, so that case is rather uninteresting.Since the trace s is alternating, we consider two messages in each step:Assume s = s ′ ::( O , ( a , p , p ′ , d ))::( P , ( a , p , p ′ , d )) ∈ S and that s ′ ∈ S . From the definition of cc we know that a = ˜ π A ( a ), p = ˜ π P ( p ), p ′ = ˜ π P ( p ), and d = d .We are given that initial ( f ) s −→ ′ ( { ( ∅ , h ) : E } , ∅ ) as in the hypothesis.We have five cases for the port name a . We show the first three, as theothers are similar. In each case our single engine will receive a messageand start a thread: – If a ∈ ini A ′ , then (since s is justified) p = p ′ and (by the definitionof π ′ A ) a = π − A ( a ). The engine runs the first clause of the copycatdefinition, and chooses to create the pointer p and then performs asend operation. We thus get: initial ( f ) s −→ ( { ( ∅ , h ∪ { p ′ p ′ } ) } , ∅ )It can easily be verified that the hypothesis holds for this new state. – If a ∈ ( opp A ′ ∩ qst A ′ ) \ ini A ′ , then a = π − A ( a ). Since s is justifiedand strictly nested, there is a message ( P , ( a , p , p , d )) ⊆ s ′ thatis pending.By the hypothesis there is a message ( O , ( π ′ A ( a ) , p , p ′ , d )) ⊆ s ′ with h ( p ) = p ′ , which means that the ccq instruction can be run,yielding the following: initial ( f ) s −→ ( { ( ∅ , h ∪ { p ′ p ′ } ) } , ∅ )The hypothesis can easily be verified also in this new state. – If a ∈ opp A ′ ∩ ans A ′ , then a = π − A ( a ). Since s is justified andstrictly nested, there is a prefix s ::( P , ( a , p , p , d )) ≤ s ′ whose lastmessage is a pending question. 
By the hypothesis s is then on theform s = s ::( O , ( π ′ A ( a ) , ˜ π P ( p ) , ˜ π P ( p ) , d )) with h = h ′ ∪ { p π P ( p ) } , which means that the cca instruction can be run, yieldingthe following: initial ( f ) s −→ ( { ( ∅ , h ′ ) } , ∅ )The hypothesis is still true; the a question is no longer pending andits pointer is removed from the heap (notice that p = ˜ π P ( p )). Theorem 3.18. If s = s :: o :: s ∈ cc A , A ′ and p * s , then s :: p ∈ cc A , A ′ , where o = ( O , ( a, p, p ′ , d )) and p = ( P , (˜ π A ( a ) , ˜ π P ( p ) , ˜ π P ( p ′ ) , d )) (i.e. the “copy” of o ).Proof. By induction on . • Base case. This means that s = s :: o :: s ∈ cc alt A , A ′ . But since p * s andby the definition of the alternating copycat, s = ǫ . It is easy to checkthat s :: p ∈ cc alt A , A ′ and that it is legal. • Inductive step. Assume s s ′ for an s ′ ∈ P A ⇒ A ′ such that s ′ :: p ∈ cc A , A ′ .By Lemma 3.15, s :: p ∈ cc A , A ′ . Definition 3.19.
Define the multiset of messages that a net configuration n isready to immediately send as ready( n ) ∆ = { ( P , m ) | ∃ n ′ . n −→ ∗ ( P ,m ) −−−−→ n ′ } . Definition 3.20. If s is a trace, h is a heap, A is a game interface, and π P isa permutation over P , we say that h is a copycat heap for s over A if and onlyif:For every pending P -question from A in s , i.e. ( P , ( a, p, p ′ , d )) ⊆ s ( a ∈ qst A ), h ( p ′ ) = (˜ π P ( p ′ ) , ∅ ) . Lemma 3.21. If s ∈ cc is a trace such that initial( CC ) s −→ n , then the followingholds:1. If n −→ ∗ n ′ then ready( n ) = ready( n ′ ) .2. If n −→ ∗ ( P ,m ) −−−−→ n ′ , then ready( n ) = ready( n ′ ) ∪ { ( P , m ) } . As we are only interested in what is observable, the trace s is thus equivalentto one where silent steps are only taken in one go by one thread right beforeoutputs. Proof.
1. For convenience, we give the composition of silent steps a name, n ( x ) −−→ ∗ n ′ . We proceed by induction on the length of ( x ): • Base case. Immediate. • Inductive step. If n −→ ( x ′ ) −−→ ∗ n ′ , we analyse the first silent step, whichmeans that a thread t of the engine in the net takes a step:27 In the cases where an instruction that does not change or dependon the heap is run, the step cannot affect ready ( n ). – In the case where the instruction is in { cci , ccq , exi , exq } , wenote that the heap is not changed , but merely extended with afresh mapping which can not have appeared earlier in the trace. – If the instruction is cca , since the trace s is strictly nested byassumption, the input message that this message stems fromoccurs in a position in the trace where it would later be illegalto mention the deallocated pointer again.2. Immediate. Theorem 3.22. If s ∈ cc st is a trace such that initial( CC ) s −→ n for an n =( { ( t, h ) : E } , m ) , then there exists a permutation π P over P such that thefollowing holds:1. The heap h is a copycat heap for s over A ⇒ A ′ .2. The set of messages that n can immediately send, ready( n ) , is exactly theset of messages p such that s = s :: o :: s and p * s where the form of o and p is o = ( O , ( a, p, p ′ , d )) and p = ( P , (˜ π A ( a ) , ˜ π P ( p ) , ˜ π P ( p ′ ) , d )) (i.e. the“copy” of o ).Proof. Induction on the length of s . The base case is immediate.We need to show that if the theorem holds for a trace s , then it also holdsfor s :: α . We thus assume that there exists a permutation π P such that thehypothesis holds for s and that initial ( CC ) s −→ n −→ ∗ α −→ n ′ .1. If α = ( P , (˜ π A ( a ) , ˜ π P ( p ) , ˜ π P ( p ′ ) , d )) then by (2) there must be a message o = ( O , ( a, p, p ′ , d )) such that s = s :: o :: s and α ∈ ready ( n ). 
Since we“chose” π P such that p can only be gotten from the thread spawned by o , we can proceed by cases as we did Theorem 3.17 to see that the heapstructure is correct in each case.2. • If α = ( P , (˜ π A ( a ) , ˜ π P ( p ) , ˜ π P ( p ′ ) , d )) then by (2) there must be a mes-sage o = ( O , ( a, p, p ′ , d )) such that s = s :: o :: s and α ∈ ready ( n ).By Lemma 3.21, ready ( n ) = ready ( n ′ ) ∪ { α } . We can easily verifythat (2) holds for n ′ . • If α = ( O , ( a, p , p ′ , d )), then we can proceed as in Theorem 3.17 tosee that a message p = ( P , (˜ π A ( a ) , p , p ′ , d )) ∈ ready ( n ′ ). We thensimply construct our extended permutation such that the hypothesisholds. 28 .4 Composition The definition of composition in Hyland-Ong games [19] is eerily similar to ourdefinition of trace composition, so we might expect HRAM net composition tocorrespond to it. That is, however, only superficially true: the nominal settingthat we are using [9] brings to light what happens to the justification pointersin composition.If A is an interface, s ∈ traces A and X ⊆ sup ( A ), we define the reindexingdeletion operator s ⇂ X as follows, where ( s ′ , ρ ) = s ⇂ X inductively: ǫ ⇂ X ∆ = ( ǫ, id ) s ::( l, ( a, p, p ′ , d )) ⇂ X ∆ = ( s ′ ::( l, ( a, ρ ( p ) , p ′ , d )) , ρ ) if a / ∈ Xs ::( l, ( a, p, p ′ , d )) ⇂ X ∆ = ( s ′ , ρ ∪ { p ′ ρ ( p ) } ) if a ∈ X We write s ⇂ X for s ′ when s ⇂ X = ( s ′ , ρ ) in the following definition: Definition 3.23.
The game composition of the sets of traces S₁ ⊆ traces_{A⇒B} and S₂ ⊆ traces_{B′⇒C} with π ⊢ B =α B′ is

S₁ ;_G S₂ = { s ⇂ B | s ∈ traces_{A⊗B⊗C} ∧ s ⇂ C ∈ S₁ ∧ π · s*_B ⇂ A ∈ S₂ }

In general S₁ ; S₂ ≠ S₁ ;_G S₂ for sets of traces S₁ and S₂, which reinforces the practical problem from the beginning of this section.

Composition is constructed out of three copycat-like behaviours, as sketched in Fig. 6 for a typical play at some types A, B and C. As a trace in the nominal model, this is:

(q, p, p)::(q, p, p)::(q, p, p)::(q, p, p)::(q, p, p)::(q, p, p)::(a, p)::(a, p)::(a, p)::(a, p)::(a, p)::(a, p)

This almost corresponds to three interleaved copycats as described above: between A, B, C and A′, B′, C′. There is, however, a small difference. The move q, if it were to blindly follow the recipe of a copycat, would dereference the pointer p, yielding p, and so incorrectly make the move q justified by q, whereas it really should be justified by q as in the diagram. This is precisely the problem explained at the beginning of this section.

To make a pointer extension, when the B-initial move q is performed, it should map p not only to p, but also to the pointer that p points to, which is p (the dotted line in the diagram). When the A-initial move q is performed, it has access to both of these pointers that p maps to, and can correctly make the q move by associating it with pointers p and a fresh p.

[Figure 6: Composition from copycat]

Let A′, B′, and C′ be game interfaces such that π_A ⊢ A =α A′, π_B ⊢ B =α B′, π_C ⊢ C =α C′, and

(A′ ⇒ A, P_A) = CC_{exq, π_A⁻¹, A′}
(B ⇒ B′, P_B) = CC_{exi, π_B, B}
(C ⇒ C′, P_C) = CC_{cci, π_C, C},

where exi ∆ = 0 , ← get
0; 1 ← new , exq ∆ = ∅ , ← get
0; 1 ← new , and K_{A,B,C} is:

K_{A,B,C} ≜ ((A ⇒ B) ⊗ (B′ ⇒ C) ⇒ (A′ ⇒ C′), P_A ∪ P_B ∪ P_C).

Using the game composition operator K we can define GAM-net composition using HRAM-net compact closed combinators. Let f : A ⇒ B, g : B ⇒ C be GAM nets. Then their composition is defined as

f ;_GAM g ≜ Λ⁻¹_A((Λ_A(f) ⊗ Λ_B(g)); K_{A,B,C}),

where

Λ_A(f : A → B) ≜ (η_A; (id_{A*} ⊗ f)) : I → A* ⊗ B
Λ⁻¹_A(f : I → A ⊗ B) ≜ ((id_A ⊗ f); (ε_A ⊗ id_B)) : A → B.

Composition is represented diagrammatically as in Fig. 7. Note the comparison with the naive composition from Fig. 4. The HRAMs f and g are not plugged in directly, although the interfaces match. Composition is mediated by the operator K, which preserves the locality of freshly generated names, exchanging non-local pointer names with local pointer names and storing the mapping between the two as copy-cat links, indicated diagrammatically by dotted lines in K.

[Figure 7: Composing GAMs using the K HRAM]
Theorem 3.24. If f : A → B and g : B ′ → C are game nets such that π B ⊢ B = A B ′ , f implements S f ⊆ P A ⇒ B , and g implements S g ⊆ P B ′ ⇒ C ,then f ; GAM g implements ( S f ; G S g ) . Definition 3.25. If s is a trace, h is a heap, A is a game interface, and π P isa permutation over P , we say that h is an extended copycat heap for s over A if and only if:1. For every pending P -question non-initial in A in s , i.e. ( P , ( a, p, p ′ , d )) ⊆ s ( a ∈ qst A \ ini A ), h ( p ′ ) = (˜ π P ( p ′ ) , ∅ ) .2. For every pending P -question initial in A in s and its justifying move,i.e. ( O , ( a , p , p, d )):: s ′ ::( P , ( a , p, p , d )) ⊆ s ( a ∈ ini A ), h ( p ) =(˜ π P ( p ) , ˜ π P ( p )) . Theorem 3.26. If f : A → B and g : B ′ → C are game nets such that π B ⊢ B = A B ′ , f implements S f ⊆ P A ⇒ B , and g implements S g ⊆ P B ′ ⇒ C ,then ( S f ; G S g ) st,alt ⊆ AP J f ; GAM g K = J Λ − A (Λ A ( f ) ⊗ Λ B ′ ( g ); K A , B , C ) K .Proof. We show that s ′ ∈ ( S f ; G S g ) st,alt implies that there exists a π P suchthat π A , C · π P · s ′ ∈ J f ; GAM g K = J Λ − A (Λ A ( f ) ⊗ Λ B ′ ( g ); K A , B , C ) K = J Λ A ( f ) K ⊗ J Λ B ′ ( g ) K ; J K A , B , C K . Recall the definition of game composition: S f ; G S g ∆ = { s ⇂ B | s ∈ traces A ⊗ B ⊗ C ∧ s ⇂ C ∈ S f ∧ π B · s ∗ B ⇂ A ∈ S g } We proceed by induction on the length of such an s : • Hypothesis. 
There exists an s_K such that initial(K_{A,B,C}) −s_K→ n, where n = ({(∅, h) : E}, ∅) and h is exactly (nothing more than) the union of a copycat heap for s_K over A′ ⇒ A, a copycat heap for s_K over C ⇒ C′, and an extended copycat heap for s_K over B ⇒ B′. Let

s_f ≜ s ⇂ C
s_g ≜ π_B · s∗_B ⇂ A
s_{f;g} ≜ s ⇂ B
s_{Kf} ≜ s_K − A′, B′, C, C′, the part of s_K relating to f
s_{Kg} ≜ s_K − A, A′, B, C′, the part of s_K relating to g
s_{Kf;g} ≜ s_K − A, B, B′, C, the part of s_K relating to the whole game net.

We require that s_K fulfils s∗_{Kf} = s_f, s∗_{Kg} = s_g, and s_{Kf;g} = π_{A,C} · π_P · s_{f;g}. Note that s_{Kf;g} is the trace of f ;_GAM g, by the definition of trace composition.

• Base case. Immediate.

• Inductive step. Assume s = s′ :: α and that the hypothesis holds for s′ and some π′_P and s′_K. We proceed by cases on the message α:

– If α = (O, (a, p, p′, d)), we have three cases:

∗ If a ∈ sup(A), intuitively this means that we are getting a message from outside the K engine and need to propagate it through K to f. We construct s_K and π_P such that s_K = s′_K :: (O, (π_A(a), π̃_P(p), π̃_P(p′), d)) :: α∗, by further sub-cases on a (π_P will be determined by steps of the K configuration):

· a ∈ ini_A cannot be the case, because an initial message in A must be justified by an initial move (an O-message) in C, and so must be a P-message.

· If a ∈ (qst_A \ ini_A) ∪ ans_A, then s′ ⇂ C :: α = (s′ :: α) ⇂ C, as the message must be justified by a message from A. As f is O-closed, s ⇂ C ∈ ⟦Λ_A(f)⟧. This trace can be stepped to by n′ just as in Theorem 3.17. We can verify that the parts of the hypothesis not covered by that theorem hold; in particular, for this case we have s_{Kf} = s′_{Kf} :: α∗, so indeed s∗_{Kf} = s_f as required.

∗ If a ∈ sup(B), intuitively this means that g is sending a message to f, which has to go through K.
We construct s_K and π_P such that s_K = s′_K :: (O, (a, π̃_P(p), π̃_P(p′), d)) :: π_B · α∗, by further sub-cases on a (π_P will be determined by steps of the K configuration):

· If a ∈ ini_B, there must be a pending P-message from C justifying α in s′, i.e. (P, (a₀, p₀, π̃_P(p), d₀)) ⊆ s′, and then by Definition 3.20 h(π̃_P(p)) = (p, ∅) (as π̃_P is its own inverse). This means that (running the exi instruction) we get:

n′ −(O, (π_B(a), π̃_P(p), π̃_P(p′), d))→ →∗ −α∗→ ({(∅, h ∪ {p′ ↦ (π̃_P(p′), p)}) : E}, ∅) = n

π_B · α∗ is a new pending P-question in the trace that is initial in B ⇒ B′, but our new heap mapping fulfils clause (2) of Definition 3.25, as required.

· If a ∈ (qst_B \ ini_B) ∪ ans_B, this is similar to the A case (note that the extended copycat only differs from the ordinary copycat on initial messages).

∗ If a ∈ sup(C), intuitively this means that we are getting a message from outside the K engine and need to propagate it through K to g. We construct s_K and π_P such that

s_K = s′_K :: (O, (π_C(a), π̃_P(p), π̃_P(p′), d)) :: α∗.

In this case the code that we run is just that of CC, so we can proceed as in Theorem 3.17, easily verifying our additional assumptions.

– If α = (P, (a, p, p′, d)), we have three cases:

∗ If a ∈ sup(A), intuitively this means that we get a message from f and need to propagate it through K to the outside. By further sub-cases on a, we construct s_K and π_P such that

s_K = s′_K :: α∗ :: (P, (π_A(a), π̃_P(p), π̃_P(p′), d)).

The pointer permutation π_P will be determined by steps of the K configuration.

· If a ∈ ini_A, then α must be justified in s′ by a pending and initial P-question from B, by the definition of A ⇒ B, which must in turn be justified by a pending and initial O-question from C, by the definition of B ⇒ C.
In s′_K we have (since s′_{Kf;g} = π_{A,C} · π_P · s′_{f;g})

s′_K = s₀ :: (O, (a_{C′}, p₀, p_{C′}, d_{C′})) :: s₁ :: (P, (a_B, p_{C′}, p, d_{C′})) :: s₂

This means that clause (2) of Definition 3.25 applies, so that h(p) = (π̃_P(p), π̃_P(p_{C′})), and that (running the exq instruction) we get:

n′ −α∗→ →∗ −(P, (π_A(a), π̃_P(p), π̃_P(p′), d))→ ({(∅, h ∪ {π̃_P(p′) ↦ (p′, d)}) : E}, ∅) = n

Clause (1) of Definition 3.25 applies to these new messages and trivially holds.

· When a ∈ (qst_A \ ini_A) ∪ ans_A, the code that we run is just that of CC, so we can proceed as in Theorem 3.17, also verifying our additional assumptions.

∗ If a ∈ sup(B), intuitively this means that f is sending a message to g, which has to go through K.

· a ∈ ini_B cannot be the case for a P-message.

· When a ∈ (qst_B \ ini_B) ∪ ans_B, the code that we run is just that of CC, so we can proceed as in Theorem 3.17, also verifying our additional assumptions.

∗ If a ∈ sup(C), intuitively this means that we get a message from g and need to propagate it through K to the outside.

· a ∈ ini_C cannot be the case for a P-message.

· When a ∈ (qst_C \ ini_C) ∪ ans_C, the code that we run is just that of CC, so we can proceed as in Theorem 3.17, also verifying our additional assumptions.

Lemma 3.27. If f : A → B and g : B′ → C are game nets such that π_B ⊢ B =_A B′, f implements S_f ⊆ P_{A⇒B}, and g implements S_g ⊆ P_{B′⇒C}, then ⟦f ;_GAM g⟧ is P-closed with respect to (S_f ;_G S_g).

Proof. Similar to Theorems 3.22 and 3.26. We identify the set ready(n) with the "uncopied" messages of a K net configuration n and show that these are legal according to the game composition. Then we show by induction that, assuming a heap as in Theorem 3.26, the ready(n) set is precisely those messages.
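The trace-level content of game composition used in these proofs, synchronisation on B followed by hiding, can also be read extensionally. The following toy sketch models deterministic strategies as functions on moves tagged with their component and composes them by feeding B-moves back and forth until a move escapes into A or C; it illustrates the hiding of the B dialogue, not the HRAM mechanism, and all concrete strategies and names are illustrative.

```python
# Strategies as functions on moves (component, payload); composition hides B.

def compose(f, g):
    def composite(move):
        cur, other = (f, g) if move[0] == 'A' else (g, f)
        r = cur(move)
        while r[0] == 'B':            # internal move: hand it across, hidden
            cur, other = other, cur
            r = cur(r)
        return r
    return composite

def f(move):                          # a "literal": answers any B-question with 7
    assert move == ('B', 'q')
    return ('B', 7)

def g(move):                          # "successor": asks B, adds 1 to the answer
    if move == ('C', 'q'):
        return ('B', 'q')
    return ('C', move[1] + 1)

h = compose(f, g)
assert h(('C', 'q')) == ('C', 8)      # the B dialogue is invisible outside
```

The external observer asking the C-question never sees the two B-moves exchanged internally, which is exactly what the restriction s ⇂ B expresses on traces.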
For game interfaces A₀, A₁, A₂ and permutations π_ij such that π_ij ⊢ A_i =_A A_j for i ≠ j ∈ {0, 1, 2}, we define the family of diagonal engines as:

δ_{π01,π02,A} = (A₀ ⇒ A₁ ⊗ A₂, P₀ ⊗ P₁ ⊗ P₂)

where, for i ∈ {1, 2},

P₀ ≜ { q₀ ↦ ccq; ifzero (spark q₁) (spark q₂) | q₀ ∈ opp_{A₀} ∩ qst_{A₀} ∧ q₁ = π01(q₀) ∧ q₂ = π02(q₀) }
   ∪ { a₀ ↦ cca; ifzero (spark a₁) (spark a₂) | a₀ ∈ opp_{A₀} ∩ ans_{A₀} ∧ a₁ = π01(a₀) ∧ a₂ = π02(a₀) }

P_i ≜ { q_i ↦ 0 ← set (i − 1); cci; spark q₀ | q_i ∈ ini_{A_i} ∧ q₀ = π0i⁻¹(q_i) }
   ∪ { q_i ↦ ccq; spark q₀ | q_i ∈ (opp_{A_i} ∩ qst_{A_i}) \ ini_{A_i} ∧ q₀ = π0i⁻¹(q_i) }
   ∪ { a_i ↦ cca; spark a₀ | a_i ∈ opp_{A_i} ∩ ans_{A_i} ∧ a₀ = π0i⁻¹(a_i) }.

The diagonal is almost identical to the copycat, except that an integer value of 0 or 1 is associated, in the heap, with the name of each message arriving on the A₁ and A₂ interfaces (hence the set statements, used to route messages arriving on A₀ back via the ifzero statements). By abuse of notation we also write δ for the singleton net (δ).

Lemma 3.28.
The δ net is the diagonal net, i.e. ⟦δ_{π01,π02,A} ; Π_i⟧ = ⟦CC_{π0i,A}⟧.

Proof.
We show that s ∈ ⟦δ_{π01,π02,A} ; Π₁⟧ implies s ∈ ⟦CC_{π01,A₀,A₁}⟧ and the converse (the Π₂ case is analogous), by induction on the trace length. There is a simple relationship between the heap structures of the respective net configurations: they have the same structure, but the diagonal stores additional integers identifying which "side" a move comes from.

We define a family of GAMs Fix_A with interface (A₁ ⇒ A₀) ⇒ A₂, where there exist permutations π_{i,j} such that π_{i,j} ⊢ A_i =_A A_j for i ≠ j ∈ {0, 1, 2}. The fixpoint engine is defined as Fix_{π01,π02,A} = Λ⁻_A(δ_{π01,π02,A}).

Let fix_{π01,π02,A} : (A₀ ⇒ π01 · A₀) ⇒ π02 · A₀ be the game-semantic strategy for fixpoint in Hyland-Ong games [19, p. 364].

Theorem 3.29.
Fix_{π01,π02,A} implements fix_{π01,π02,A}.

The proof of this is immediate, considering the three cases of moves from the definition of the game-semantic strategy. It is interesting to note here that we "force" a HRAM with interface A₀ ⇒ A₁ ⊗ A₂ into a GAM with game interface (A₁ ⇒ A₀) ⇒ A₂, which has underlying interface (A₁ ⇒ A₀) ⇒ A₂. In the HRAM-net category, which is symmetric compact-closed, the two interfaces are isomorphic (both to A₀∗ ⊗ A₁ ⊗ A₂), but as game interfaces they are not. It is rather surprising that we can reuse our diagonal GAMs in such brutal fashion: in the game interface for fixpoint there is a reversed enabling relation between A₀ and A₁. The reason this still leads only to legal plays is that the onus of producing the justification pointers in the initial moves of A₁ lies with the Opponent, which cannot exploit the fact that the diagonal is "wired illegally". It only sees the fixpoint interface and must play accordingly. It is fair to say that the fixpoint interface is more restrictive to the Opponent than the diagonal interface, because the diagonal interface allows extra behaviours, e.g. sending initial messages in A₁, which are no longer legal.

A GAM net for an integer literal n can be defined using the following engine (whose interface corresponds to the ICA exp type):

lit_n ≜ ({(O, q), (P, a)}, P), where P ≜ { q ↦ flip 0 1; 1 ← set ∅; 2 ← set n; spark a }

We see that upon receiving an input question on port q, this engine responds with a legal answer containing n as its value (register 2). The conditional at type exp can be defined using the following engine, with the convention that {(O, q_i), (P, a_i)} = exp_i:

if ≜ (exp₁ ⇒ exp₂ ⇒ exp₃ ⇒ exp₀, P), where

P ≜ { q₀ ↦ cci; spark q₁,
      a₁ ↦ cca; flip 0 1; cci; ifzero (spark q₂) (spark q₃),
      a₂ ↦ cca; spark a₀,
      a₃ ↦ cca; spark a₀ }

We can also define primitive operations, e.g. + : exp ⇒ exp ⇒ exp, in a similar manner. An interesting engine is that for newvar:

newvar ≜ ((exp ⊗ (exp ⇒ com) ⇒ exp) ⇒ exp, P)

P ≜ { q₀ ↦ 0 ← set ∅; cci; spark q_b,
      q_r ↦ ∅ ← get; flip 0 1; 1 ← set ∅; spark a_r,
      q_w ↦ flip 0 1; 1 ← new; spark q_v,
      a_v ↦ ∅ ← get; update; cca; spark a_w,
      a_b ↦ cca; spark a₀ }

We see that we store the variable in the second component of the justification pointer that justifies q_b, so that it can be accessed in subsequent requests. A slight problem is that moves in exp will not actually be justified by this pointer, which we remedy in the q_r case by storing a pointer to the pointer, with the variable as the second component of the justifier of q_r; this means that we can access and update the variable in a_v. We can easily extend the HRAMs with new instructions to interpret parallel execution and semaphores, but we omit them from the current presentation.

ICA is PCF extended with constants to facilitate local effects. Its ground types are expressions and commands (exp, com), with the type of assignable variables desugared as var ≜ exp × (exp → com). Dereferencing and assignment are desugared as the first and second projections, respectively, from the type of assignable variables. The local variable binder is new : (var → com) → com. ICA also has a type of split binary semaphores, sem ≜ com × com, with the first and second projections corresponding to set and get, respectively (see [14] for the full definition, including the game-semantic model).

In this section we give a compilation method for ICA into GAM nets. The compilation is compositional on the syntax and uses the constructs of the previous section. ICA types are compiled into GAM interfaces which correspond to their game-semantic arenas in the obvious way. We will use A, B, . . . to refer both to an ICA type and to the corresponding GAM interface. Sec. 3 has already developed all the infrastructure needed to interpret the constants of ICA (Sec. 3.7), including fixpoint (Sec. 3.6).

Figure 8: GAM net for application
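The constant engines of Sec. 3.7 can be read as port-indexed message handlers. The following is an executable caricature, not the HRAM instruction set: an engine is a dictionary from input ports to handlers, `run` delivers messages until one leaves the net, and all port names are illustrative.

```python
# Toy reading of the lit and if engines as message handlers.

def make_lit(n, q='q', a='a'):
    # On a question at port q, answer at port a with the value n.
    return {q: lambda send, _msg: send(a, n)}

def make_if():
    # q/a: the result; q1/a1: the guard; q2/a2 and q3/a3: the two branches.
    return {
        'q':  lambda send, _msg: send('q1', None),                   # ask the guard
        'a1': lambda send, v: send('q2' if v == 0 else 'q3', None),  # route on its value
        'a2': lambda send, v: send('a', v),                          # a branch answer
        'a3': lambda send, v: send('a', v),                          #   becomes ours
    }

def run(engines, port, msg):
    out = []
    def send(p, m):
        if p in engines:
            engines[p](send, m)   # deliver internally
        else:
            out.append((p, m))    # message leaves the net
    send(port, msg)
    return out

net = {**make_if(),
       **make_lit(0, 'q1', 'a1'),    # guard evaluates to 0
       **make_lit(10, 'q2', 'a2'),   # then-branch
       **make_lit(20, 'q3', 'a3')}   # else-branch
assert run(net, 'q', None) == [('a', 10)]
```

Note how wiring the literals into the conditional is pure port renaming, which is the same observation that makes the paper's composition a matter of name translation rather than special machinery.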
Given an ICA type judgment Γ ⊢ M : A, with Γ a list of variable-type assignments x_i : A_i, M a term and A a type, a GAM G_M implementing it is defined compositionally on the syntax as follows:

G_{Γ ⊢ M M′ : B} = δ_{π01,π02,Γ} ;_GAM (G_{Γ ⊢ M : A → B} ⊗ G_{Γ ⊢ M′ : A}) ;_GAM eval_{A,B}
G_{Γ ⊢ λx : A.M : A → B} = Λ_A(G_{Γ,x:A ⊢ M : B})
G_{x : A, Γ ⊢ x : A} = Π^G_1 ; CC_{A,π},

where eval_{A,B} ≜ Λ⁻_B(CC_{A ⇒ B,π}) for a suitably chosen port renaming π, and Π^G_1 and Π^G_2 are HRAMs with signatures Π^G_i = (A₁ ⊗ A₂ ⇒ A, P_i) such that they copycat between A and A_i and ignore A_{j≠i}. The interpretation of function application, which is the most complex, is shown diagrammatically in Fig. 8. The copycat connections are shown using dashed lines.

Theorem 4.1. If M is an ICA term, G_M is the GAM implementing it and σ_M its game-semantic strategy, then G_M implements σ_M.

The correctness of compilation follows directly from the correctness of the individual GAM nets and the correctness of GAM composition ;_GAM.

Following the recipe of the previous section we can produce an implementation of any ICA term as a GAM net. GAMs are just special-purpose HRAMs, with no special operations. HRAMs, in turn, can easily be implemented on any conventional computer with the usual store, control and communication facilities. A GAM net is also just a special-purpose HRAM net, which is a powerful abstraction of communicating processes: through the spark instruction it subsumes communication between processes (threads) on the same physical machine, or located on distinct physical machines and communicating via a point-to-point network. We have built a prototype compiler based on GAMs by implementing them in C, managing processes using standard UNIX threads and physical network distribution using MPI [17]. It can be downloaded with source code from http://veritygos.org/gams.

Figure 9: Optimised GAM net for application

The actual distribution is achieved using lightweight pragma-like code annotations. To execute a program at node A but delegate one computation to node B and another to node C, we simply annotate an ICA program with node names, e.g.:

{new x. x := {f(x)}@B + {g(x)}@C; !x}@A

Note that this gives node B, via function f, read-write access to memory location x, which is located at node A. Accessing non-local resources is possible, albeit potentially expensive. Several facts make the compilation process quite remarkable:

• It is seamless (in the sense of [8]), allowing distributed compilation where communication is never explicit but always realised through function calls.

• It is flexible, allowing any syntactic sub-term to be located at any designated physical location, with no impact on the semantics of the program. Access to non-local resources is always possible, albeit possibly at a cost (latency, bandwidth, etc.).

• It is dynamic, allowing the relocation of GAMs to different physical nodes at run time.
This can be done with extremely low overhead if the GAM heap is empty.

• It does not require any form of garbage collection, even on local nodes, although the language combines (ground) state, higher-order functions and concurrency. This is because the heap entry associated with a pointer is needed if and only if its question is unanswered; once the question is answered, it can be safely deallocated.

The current implementation does not perform any optimisations, and the resulting code is inefficient. Looking at the implementation of application in Fig. 8, it is quite clear that a message entering the GAM net via port A needs to undergo four pointer renamings before reaching the GAM for M. This is the cost we pay for compositionality. However, the particular configuration for application can be significantly simplified using standard peephole optimisation, and we can reach the much simpler, still correct, implementation in Fig. 9. Here the functionality of the two compositions, the diagonal and the eval GAMs has been combined and optimised into a single GAM, requiring only one pointer renaming before reaching M. Other optimisations can be introduced to simplify GAM nets, in particular to obviate the need for the composition GAMs K, for example by showing that composition of first-order closed terms (such as those used for most constants) can be done directly.

In a previous paper we argued that distributed and heterogeneous programming would benefit from the existence of architecture-agnostic, seamless compilation methods for conventional programming languages, which can allow the programmer to focus on solving algorithmic problems without being overwhelmed by the minutiae of driving complex computational systems [8]. In loc. cit. we give such a compiler for PCF, based directly on the Geometry of Interaction.
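The memory-safety discipline behind the no-garbage-collection property can be made concrete. This is a minimal sketch with illustrative names: a heap entry is allocated when its question is asked and freed exactly when that question is answered, so the heap is empty whenever no question is pending.

```python
# Allocate on question, free on the matching answer: no collector needed.

class QuestionHeap:
    def __init__(self):
        self.cells = {}

    def ask(self, ptr, data):
        self.cells[ptr] = data      # allocate when the question is asked

    def answer(self, ptr):
        return self.cells.pop(ptr)  # free exactly when it is answered

h = QuestionHeap()
h.ask(1, 'caller-1')
h.ask(2, 'caller-2')
assert h.answer(2) == 'caller-2'
assert h.answer(1) == 'caller-1'
assert not h.cells                  # all questions answered: nothing to collect
```

Because deallocation is driven by the question/answer protocol itself, the discipline works unchanged whether the two ends of a dialogue live on one node or on different nodes.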
In this paper we show how Game Semantics can be expressed operationally using abstract machines very similar to networked conventional computers, a further development of the IAM/JAM game machines. We believe any programming language with a semantic model expressed as Hyland-Ong-style pointer games [19] can be readily represented using GAMs and then compiled to a variety of platforms such as MPI. Even more promising is the possible leveraging of more powerful infrastructure for distributed computing that can mask much of the complexity of distributed programming, such as fault tolerance [23].

The compositional nature of the compiler is very important because it gives rise to a very general notion of foreign-function interface, expressible both as control and as communication, which allows a program to interface with other programs in a syntax-independent way (see [13] for a discussion), opening the door to the seamless development of heterogeneous open systems in a distributed setting.

We believe we have established a solid foundational platform on which to build realistic seamless distributed compilers. Further work is needed in optimising the output of the compiler, which is currently, as discussed, inefficient. The sources of inefficiency in this compiler are not just the generation of heavy-duty plumbing, but also the possibly unwise assignment of computations to nodes, requiring excessive network communication. Previous work on game semantics for resource usage can be naturally adapted to the operational setting of the GAMs and facilitate the automation of optimised task assignment [11].
References

[1] S. Abramsky, R. Jagadeesan, and P. Malacaria. Full Abstraction for PCF. Inf. Comput., 163(2):409–470, 2000.

[2] N. Benton. Embedded interpreters. J. Funct. Program., 15(4):503–542, 2005.

[3] G. Berry and G. Boudol. The Chemical Abstract Machine. In Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, San Francisco, California, USA, January 1990, pages 81–94. ACM Press, 1990.

[4] P.-L. Curien and H. Herbelin. Abstract machines for dialogue games. CoRR, abs/0706.2544, 2007.

[5] V. Danos, H. Herbelin, and L. Regnier. Game Semantics & Abstract Machines. In Proceedings, 11th Annual IEEE Symposium on Logic in Computer Science, New Brunswick, New Jersey, USA, July 27-30, 1996, pages 394–405. IEEE Computer Society, 1996.

[6] V. Danos and L. Regnier. Reversible, Irreversible and Optimal lambda-Machines. Theor. Comput. Sci., 227(1-2):79–97, 1999.

[7] C. Faggian and F. Maurel. Ludics Nets, a Game Model of Concurrent Interaction. In Proceedings of the 20th Annual IEEE Symposium on Logic in Computer Science, LICS 2005, pages 376–385. IEEE Computer Society, 2005.

[8] O. Fredriksson and D. R. Ghica. Seamless distributed computing from the geometry of interaction. In Trustworthy Global Computing, 2012. Forthcoming.

[9] M. Gabbay and D. R. Ghica. Game Semantics in the Nominal Model. Electr. Notes Theor. Comput. Sci., 286:173–189, 2012.

[10] M. Gabbay and A. M. Pitts. A New Approach to Abstract Syntax Involving Binders. In Proceedings of the 14th Annual IEEE Symposium on Logic in Computer Science, LICS 1999, pages 214–224. IEEE Computer Society, 1999.

[11] D. R. Ghica. Slot games: a quantitative model of computation. In Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2005, Long Beach, California, USA, January 12-14, 2005, pages 85–97. ACM, 2005.

[12] D. R. Ghica. Applications of Game Semantics: From Program Analysis to Hardware Synthesis. In Proceedings of the 24th Annual IEEE Symposium on Logic in Computer Science, LICS 2009, 11-14 August 2009, Los Angeles, CA, USA, pages 17–26. IEEE Computer Society, 2009.

[13] D. R. Ghica. Function interface models for hardware compilation. In MEMOCODE 2011, pages 131–142. IEEE, 2011.

[14] D. R. Ghica and A. S. Murawski. Angelic semantics of fine-grained concurrency. Ann. Pure Appl. Logic, 151(2-3):89–114, 2008.

[15] D. R. Ghica and N. Tzevelekos. A System-Level Game Semantics. Electr. Notes Theor. Comput. Sci., 286:191–211, 2012.

[16] J.-Y. Girard. Geometry of interaction 1: Interpretation of System F. Studies in Logic and the Foundations of Mathematics, 127:221–260, 1989.

[17] W. Gropp, E. Lusk, and A. Skjellum. Using MPI: Portable Parallel Programming with the Message Passing Interface, volume 1. MIT Press, 1999.

[18] J. M. E. Hyland and C.-H. L. Ong. Pi-Calculus, Dialogue Games and PCF. In FPCA, pages 96–107, 1995.

[19] J. M. E. Hyland and C.-H. L. Ong. On Full Abstraction for PCF: I, II, and III. Inf. Comput., 163(2):285–408, 2000.

[20] J. Laird. Exceptions, Continuations and Macro-expressiveness. In Programming Languages and Systems, 11th European Symposium on Programming, ESOP 2002, held as Part of the Joint European Conference on Theory and Practice of Software, ETAPS 2002, Grenoble, France, April 8-12, 2002, Proceedings, pages 133–146. Springer, 2002.

[21] I. Mackie. The Geometry of Interaction Machine. In Conference Record of POPL'95: 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, San Francisco, California, USA, January 23-25, 1995, pages 198–208. ACM Press, 1995.

[22] R. Milner. Functions as Processes. In Automata, Languages and Programming, 17th International Colloquium, ICALP90, Warwick University, England, July 16-20, 1990, Proceedings, pages 167–180. Springer, 1990.

[23] D. Murray, M. Schwarzkopf, C. Smowton, S. Smith, A. Madhavapeddy, and S. Hand. CIEL: a universal execution engine for distributed data-flow computing. In Proceedings of the 8th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2011, 2011.

[24] C.-H. L. Ong. Verification of Higher-Order Computation: A Game-Semantic Approach. In