Probabilistic Analysis of Binary Sessions
Omar Inverso, Hernán Melgratti, Luca Padovani, Catia Trubiani, Emilio Tuosto
Omar Inverso
Gran Sasso Science Institute, Italy
Hernán Melgratti
ICC – Universidad de Buenos Aires – Conicet, Argentina
Luca Padovani
Università di Torino, Italy
Catia Trubiani
Gran Sasso Science Institute, Italy
Emilio Tuosto
Gran Sasso Science Institute, Italy
Abstract
We study a probabilistic variant of binary session types that relate to a class of Finite-State Markov Chains. The probability annotations in session types enable reasoning about the probability that a session terminates successfully, for some user-definable notion of successful termination. We develop a type system for a simple session calculus featuring probabilistic choices and show that the success probability of well-typed processes agrees with that of the sessions they use. To this aim, the type system needs to track the propagation of probabilistic choices across different sessions.
2012 ACM Subject Classification
Theory of computation → Type structures
Keywords and phrases
Probabilistic choices; session types; static analysis; deadlock freedom.
Funding
Omar Inverso has been partially supported by MIUR project PRIN 2017FTXR7S IT MATTERS (Methods and Tools for Trustworthy Smart Systems). Catia Trubiani has been partially supported by MIUR project PRIN 2017TWRCNB SEDUCE (Designing Spatially Distributed Cyber-Physical Systems under Uncertainty). Hernán Melgratti, Luca Padovani and Emilio Tuosto have been partially supported by the EU H2020 RISE programme under the Marie Skłodowska-Curie grant agreement No 778233. Hernán Melgratti has been partially supported by UBACyT projects 20020170100544BA and 20020170100086BA and PIP project 11220130100148CO.
Acknowledgements
The authors are grateful to the anonymous reviewers for their detailed feedback.
© Omar Inverso, Hernán Melgratti, Luca Padovani, Catia Trubiani and Emilio Tuosto; licensed under Creative Commons License CC-BY. 31st International Conference on Concurrency Theory (CONCUR 2020). Editors: Igor Konnov and Laura Kovács; Article No. 36; pp. 36:1–36:32. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.

Session types [29, 30] have consolidated as a formalism for the modular analysis of complex systems of communicating processes. A session is a private channel connecting two (sometimes more) processes, each owning one endpoint of the session and using the endpoint according to a specification – the session type – that constrains the sequence of messages that can be sent and received through that endpoint. As an example, the session type

!int.(◦ & ?int.(◦ ⊕ T))    (1.1)

could describe (part of) an auction protocol as seen from the viewpoint of a buyer process, which sends a bid (!int) and waits for a decision from the auctioneer. The protocol proceeds in two different ways, as specified by the two sides of the branching operator &. The auctioneer may declare that the item is sold, in which case the session terminates immediately (◦), or it may inform the buyer of a different (higher) bid (?int). At that point the buyer may choose (⊕) to quit the auction or to restart the same protocol, here denoted by T, with another bid.

Most session type theories are aimed at enforcing qualitative properties of a system, such as type safety, protocol compliance, deadlock and livelock freedom, and so on [30]. In these theories, branches (&) and choices (⊕) are given a non-deterministic interpretation since all that matters is understanding whether the system “behaves well” no matter how it evolves. In this work, we propose a session type system for a particular quantitative analysis of session-based networks of communicating processes.
More specifically, we shift from a non-deterministic to a probabilistic interpretation of branches and choices in session types and study a type system aimed at determining the probability with which a particular session terminates successfully. Since there is no universal interpretation of “successful termination”, we differentiate successful from unsuccessful termination of a session by means of a dedicated type constructor. For example, in our type system we can refine (1.1) as

!int.(• p& ?int.(◦ q⊕ T))    (1.2)

where the session type • indicates successful termination and branches and choices are annotated with probabilities p and q. In particular, the auctioneer declares the item sold with probability p and answers with a counteroffer with probability 1 − p, whereas the buyer decides to quit the auction with probability q and to bid again with probability 1 − q.

From an abstract description such as (1.2), we can easily compute the probability that the interaction ends up in a particular state (e.g., the probability with which the buyer wins the auction). However, (1.2) is “just” the type of one endpoint of a single session in a system, while the system itself could be much more complex: there could be many different processes involved, each making probabilistic choices affecting the behavior of faraway processes that directly or indirectly receive information about such choices through messages exchanged in sessions. Also, new processes and sessions could be created and the network topology could evolve dynamically as the system runs. How do we know that (1.2) is a faithful abstraction of our system? How do we know that the probability annotations we see in (1.2) correspond to the actual probabilities that the system evolves in a certain way?
Here is where our type system comes into play: by certifying that a system of processes is well typed with respect to a given set of session types with probability annotations, we support the computation of the probability that the system evolves in a certain way statically – i.e., before the system runs – and solely by looking at the session types we are interested in, as opposed to the system itself.

Summary of contributions and structure of the paper.
We define a session calculus in which processes may perform probabilistic choices (Section 2). We study a variant of session types based on a probabilistic interpretation of branches and choices so that session types correspond to a particular class of Discrete-Time Markov Chains (Section 3). We provide syntax-directed typing rules for relating processes and session types (Section 4). Well-typed processes are shown to behave probabilistically as specified by the corresponding session types. We are able to trace this correspondence not just for finite processes (Theorem 4.8) but also for processes engaged in potentially infinite interactions (Corollary 4.9). We discuss related work in Section 5 and ideas for further developments in Section 6. Example details and proofs of all the results are relegated to the appendices.
We let p, q and r range over probabilities, namely real numbers in the range [0, 1], and we let x, y and z range over an infinite set N of channel names. We write x̃ for finite sequences of names and other entities.

Domains:   p, q, r ∈ [0, 1]   probability        x, y, z ∈ N   name

Processes:

P, Q ::= idle             inaction
       | done x           success
       | x?(y).P          message input
       | x!y.P            message output
       | case x [P, Q]    branch
       | inl x.P          left selection
       | inr x.P          right selection
       | P | Q            parallel composition
       | (x)P             session restriction
       | P p⊞ Q           probabilistic choice
       | A⟨x̃⟩             process invocation

Table 1: Syntax of processes.

Processes, ranged over by P, Q and R, are defined by the grammar in Table 1. We have two distinct terms, idle and done x, for modeling inactive processes. We use idle to denote plain termination and done x to denote successful termination of session x. This way, we are able to relate the success rate resulting from processes to that inferrable from session types (Theorem 4.8). The terms x?(y).P and x!y.P denote a process that respectively performs an input and an output of a message y on session x and then continues as P. For simplicity, in the model we only consider messages that are themselves (session) channels, while in some examples we will also use more elaborate message types. The term case x [P, Q] represents a process that waits for a selection (either “left” or “right”) on session x and continues as either P or Q accordingly. The terms inl x.P and inr x.P represent processes that perform a selection (respectively “left” and “right”) on session x and continue as P. Parallel composition P | Q, channel restriction (x)P and process invocation A⟨x̃⟩ are standard. We assume that for every process variable A there is an equation A(x̃) := P defining it. Finally, the term P p⊞ Q represents a process that has performed a probabilistic choice and that behaves as P with probability p and as Q with probability 1 − p.

The notions of free and bound names are standard. In the following, we write fn(P) and bn(P) for the sets of free and bound names of P, respectively. For the sake of readability, we occasionally omit idle terms and we assume that input/output prefixes and selections bind more tightly than choices and parallel compositions. So, for example, inl x. done y p⊞ inr x is to be read (inl x. done y) p⊞ (inr x.
idle).

The operational semantics of processes is given by a structural pre-congruence relation ≼ and a reduction relation →, which are defined by the axioms and rules in Table 2, where we abbreviate with P ≡ Q the two relations P ≼ Q and Q ≼ P. We use a pre-congruence instead of a symmetric relation because careless rewriting of processes may compromise their typability. Nonetheless, the use of a pre-congruence does not affect the ability of processes to reduce (cf. Theorem 4.5) and most relations are symmetric anyway. We now describe the structural pre-congruence and reduction, focusing on the former relation since it is the only one that deals with probabilistic choices.

The relations described by s-par-comm, s-new-comm and s-par-new are standard and need no commentary. Axiom s-choice-comm allows us to commute a probabilistic choice. The probability needs to be suitably adjusted so as to preserve the semantics of the process. Axiom s-no-choice turns a probabilistic choice into a deterministic one when the probability is trivial. This axiom is the main motivation for adopting a pre-congruence rather than a symmetric relation. Indeed, while the symmetric relation P ≼ P 1⊞ Q makes sense operationally, it violates typing in general, for the process Q can be arbitrary. On the contrary, knowing that P 1⊞ Q is well typed allows us to easily derive that P alone is also well typed. Axiom s-choice-idem states that the probabilistic choice is idempotent, namely that a probabilistic choice between equal behaviors is not really a choice.
Structural pre-congruence (P ≼ Q):

s-no-choice:     P 1⊞ Q ≼ P
s-choice-idem:   P p⊞ P ≡ P
s-choice-comm:   P p⊞ Q ≡ Q (1−p)⊞ P
s-new-comm:      (x)(y)P ≡ (y)(x)P
s-par-comm:      P | Q ≡ Q | P
s-par-choice:    (P p⊞ Q) | R ≼ (P | R) p⊞ (Q | R)
s-par-new:       if x ∉ fn(Q), then (x)P | Q ≡ (x)(P | Q)
s-choice-assoc:  if pq < 1, then (P q⊞ Q) p⊞ R ≡ P pq⊞ (Q r⊞ R) where r = (p − pq)/(1 − pq)
s-par-assoc:     if fn(Q) ∩ fn(R) ≠ ∅, then (P | Q) | R ≼ P | (Q | R)

Reduction (P → Q):

r-com:     x!y.P | x?(y).Q → P | Q
r-left:    inl x.P | case x [Q, R] → P | Q
r-var:     A⟨x̃⟩ → P  if A(x̃) := P
r-par:     if P → Q, then P | R → Q | R
r-new:     if P → Q, then (x)P → (x)Q
r-choice:  if P → Q, then P p⊞ R → Q p⊞ R
r-struct:  if P ≼ P′ and P′ → Q′ and Q′ ≼ Q, then P → Q

Table 2: Structural pre-congruence and reduction of processes.

Rule s-choice-assoc expresses the standard associativity property for probabilistic choices, which requires a normalization of the involved probabilities. Note that this rule is applicable only when pq < 1, or else the rightmost probability in the conclusion would be undefined. When pq = 1, the process can be simplified using s-no-choice. Rule s-par-assoc expresses the associativity property for the parallel composition. The side condition, requiring the middle and rightmost processes to be connected by one shared name, is needed by the type system (cf. Section 4). The reader might be worried by the side conditions imposed on the associativity rule, since they limit the ability to rewrite processes to an extent that could prevent processes from being placed next to each other and reducing according to the reduction relation. It is possible to prove a proximity property (Lemma D.5) ensuring that this is not the case, namely that it is always possible to rearrange (well-typed) processes in such a way that processes connected by a session can communicate. The symmetric relation P | (Q | R) ≼ (P | Q) | R when fn(P) ∩ fn(Q) ≠ ∅ is derivable using s-par-assoc and s-par-comm. Rule s-par-choice distributes parallel compositions over probabilistic choices. This rule is pivotal in our model, for two different reasons. First, being able to distribute a process over a probabilistic choice is essential to make sure that processes connected by a session can be placed next to each other so that they can reduce according to →. Second, the relation is quite challenging to handle at the typing level: when R is composed in parallel with P and Q, it might be necessary to type R differently depending on whether or not the session that connects R with P and Q is affected by the probabilistic choice. This is doable provided that R uses the session safely, namely if it does not delegate the session before it becomes aware of the probabilistic choice (cf. Section 4).

The reduction relation is standard. The base cases consist of the usual rules for communication (r-com), branch selection (r-left and r-right, the latter omitted) along with the expansion of process variables (r-var). Reduction is closed under parallel compositions (r-par), restrictions (r-new), probabilistic choices (r-choice) and structural pre-congruence (r-struct). Note that a probabilistic choice P p⊞ Q is persistent, in the sense that neither P nor Q is discarded by reduction even though they morally represent two mutually-exclusive evolutions of the same process. This is one of the standard approaches for describing the semantics of probabilistic processes [28, 53, 37]. As a consequence, a process like A := idle p⊞ A (for any p > 0) diverges but terminates with probability 1. We will be able to state interesting properties of such processes through a soundness result that is relativized to the probability of termination. We write ⇒ for the reflexive, transitive closure of →, we write P → if there exists Q such that P → Q, and P ↛ if not P →. In the above example, A ⇒ P implies P →.

▶ Example 2.1 (Auction). We end this section by showing how to represent in our calculus the auction example informally described in Section 1. We define two processes, a
Buyer and a Seller connected by a session x:

Buyer(x)  := x!bid. case x [done x, x?(y).(inl x q⊞ inr x. Buyer⟨x⟩)]
Seller(x) := x?(z).(inl x. done x p⊞ inr x. x!counteroffer. case x [idle, Seller⟨x⟩])

The buyer sends the current bid on x and waits for a reaction from the seller. The seller accepts the bid with probability p and rejects it with probability 1 − p. If the seller accepts (by selecting the left branch of the session), the buyer terminates successfully. Otherwise, the seller proposes a counteroffer, which the buyer rejects with probability q and accepts with probability 1 − q. In the first case, the session terminates without satisfaction of the buyer. In the second case, the buyer starts a new negotiation. ∎
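The behavior of this Buyer/Seller pair can be sanity-checked with a small simulation. The sketch below is not part of the formal development: it assumes illustrative values p = 0.3 and q = 0.2 (any probabilities would do) and simply replays the protocol loop, in which the seller accepts with probability p and, failing that, the buyer quits with probability q. Unfolding the loop gives the closed form P = p + (1 − p)(1 − q)P, i.e. P = p/(p + q − pq), for the probability that the bid is eventually accepted.

```python
import random

def auction(p, q, rng):
    """One run of the Buyer/Seller loop: True iff the bid is eventually
    accepted (the session would end in `done x` on both sides)."""
    while True:
        if rng.random() < p:   # Seller selects "left": bid accepted
            return True
        if rng.random() < q:   # Buyer selects "left": quits the auction
            return False
        # otherwise the Buyer bids again and the protocol restarts

def estimate(p, q, runs=100_000, seed=1):
    rng = random.Random(seed)
    return sum(auction(p, q, rng) for _ in range(runs)) / runs

# The estimate should agree with the closed form p / (p + q - p*q).
print(estimate(0.3, 0.2))   # close to 0.3 / 0.44 ≈ 0.6818
```

The agreement between the simulated success rate and the value read off the types is exactly the kind of correspondence that the type system of Section 4 establishes statically.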
Session types.
Probabilistic session types describe communication protocols taking place through session endpoints and their (finite) syntax is given by the following grammar:
Session type    T, S ::= ◦ | • | ?t.T | !t.T | T p& S | T p⊕ S    (3.1)

The session types ◦ and • describe a session endpoint on which no further input/output operations are possible. We use • to mark those termination points of a protocol that represent success and that we target in our probabilistic analysis. The precise meaning of “successful termination” is domain specific but also irrelevant in the technical development that follows. The session types ?t.T and !t.T describe session endpoints used for receiving (respectively, sending) a message of type t and then according to T. Types will be discussed shortly. The session types T p& S and T p⊕ S describe a session endpoint used for receiving (respectively, sending) a binary choice which is “left” with probability p and “right” with probability 1 − p. The endpoint is then used according to T or S, respectively. Note that p⊕ is an internal choice – the process behaving according to this type internally chooses either “left” or “right” – whereas p& is an external choice – the process behaving according to this type externally offers behaviors corresponding to both choices. Therefore, the probability annotation in p& is completely determined by the one in the corresponding internal choice, and it could be argued that it is somewhat superfluous. Nonetheless, as we will see when discussing the typing rule for branch processes, having direct access to this annotation makes it easy to propagate the probability of choices across different sessions.

We do not use any special syntax for specifying infinite session types. Rather, we interpret the productions for T coinductively and we call session types the possibly infinite trees generated by the productions in (3.1) that satisfy the following conditions:
Regularity: We require every tree to consist of finitely many distinct subtrees. This condition ensures that session types are finitely representable, either using the so-called “µ notation” [47] or as solutions of finite sets of equations [16].

Reachability: We require every subtree T of a session type to contain a reachable leaf labelled by ◦ or •. This condition ensures that it is always possible to terminate a session regardless of how long it has been running.

To formalize these conditions, we define a relation T ⇝p S modeling the fact that (the behavior described by) T may evolve into S with probability p in a single step:

◦ ⇝1 ◦    • ⇝1 •    ?t.T ⇝1 T    !t.T ⇝1 T
T p& S ⇝p T    T p⊕ S ⇝p T    T p& S ⇝1−p S    T p⊕ S ⇝1−p S

We also consider the relation ⇝*p, which accounts for multiple steps in the expected way:

T ⇝*1 T        if T ⇝p S, then T ⇝*p S        if T ⇝*p T′ and T′ ⇝*q S, then T ⇝*pq S

Roughly speaking, ⇝*p is the reflexive, transitive closure of ⇝p, except that the probability annotation p accounts for the cumulative transition probability between two session types.

▶ Definition 3.1 (well-formed session type). Let T(T) := {S | ∃p: T ⇝*p S}. A (possibly infinite) tree T generated by the productions in (3.1) is a well-formed session type if T(T) is finite and, for every S ∈ T(T), there exists p > 0 such that either S ⇝*p ◦ or S ⇝*p •.

▶ Example 3.2 (auction protocol, buyer side). Even though we have not presented the typing rules for the calculus of Section 2, we can speculate on the session type of the endpoint used, e.g., by the buyer process in Example 2.1, which satisfies the equation

T = !int.(• p& ?int.(◦ q⊕ T))

In this case we have T(T) = {◦, •, • p& (?int.(◦ q⊕ T)), ?int.(◦ q⊕ T), ◦ q⊕ T, T} and it is easy to see that T is well formed provided that at least one among p and q is positive. ∎

From now on we assume that all the session types we work with are well formed.
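The regularity condition is what makes such checks effective: a session type with finitely many distinct subtrees can be stored as a finite table of named nodes. The sketch below uses a hypothetical encoding of this kind (it is an illustration, not the paper's formal machinery) and checks the reachability condition of Definition 3.1 by searching, from every node, for a path of positive probability to a leaf ◦ or •.

```python
# Hypothetical finite encoding of session types: a dict maps node names to
# shapes. ("end", True) is the success leaf •, ("end", False) is ◦,
# ("in"/"out", t, k) are ?t.K and !t.K, and ("branch"/"choice", p, l, r)
# are L p& R and L p⊕ R with "left" probability p.

def well_formed(types):
    """Reachability check of Definition 3.1: from every node, some leaf
    must be reachable through steps of positive probability."""
    def succs(name):
        shape = types[name]
        if shape[0] == "end":
            return []
        if shape[0] in ("in", "out"):
            return [shape[2]]
        _, p, left, right = shape
        return [n for n, pr in ((left, p), (right, 1 - p)) if pr > 0]

    for start in types:
        seen, stack, reached = {start}, [start], False
        while stack:
            node = stack.pop()
            if types[node][0] == "end":
                reached = True
                break
            for nxt in succs(node):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if not reached:
            return False
    return True

# Buyer-side type T = !int.(• p& ?int.(◦ q⊕ T)) from Example 3.2:
def buyer_type(p, q):
    return {"T": ("out", "int", "T1"), "T1": ("branch", p, "WIN", "T2"),
            "T2": ("in", "int", "T3"), "T3": ("choice", q, "LOSE", "T"),
            "WIN": ("end", True), "LOSE": ("end", False)}

assert well_formed(buyer_type(0.5, 0.5))      # some leaf is always reachable
assert not well_formed(buyer_type(0.0, 0.0))  # p = q = 0: the loop never exits
```

The second assertion mirrors the remark closing Example 3.2: the buyer-side type is well formed exactly when at least one of p and q is positive.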
Success probability.
We now define the probability that a protocol described by a session type T terminates successfully. Intuitively, this probability is computed by accounting for all paths in the structure of T that lead to a leaf labelled by •. Formally:

▶ Definition 3.3 (success probability). The success probability of a session type T, denoted by ⟦T⟧, is determined by the following equations:

⟦◦⟧ = 0    ⟦•⟧ = 1    ⟦?t.T⟧ = ⟦T⟧    ⟦!t.T⟧ = ⟦T⟧
⟦T p& S⟧ = p⟦T⟧ + (1 − p)⟦S⟧    ⟦T p⊕ S⟧ = p⟦T⟧ + (1 − p)⟦S⟧

For a finite session type T, Definition 3.3 gives a straightforward recursive algorithm for computing ⟦T⟧. When T is infinite, however, it is less obvious that Definition 3.3 provides a way for determining ⟦T⟧. To address the problem in the general case we observe that, by interpreting ⟦T⟧ as a probability variable, Definition 3.3 allows us to derive a finite system of equations relating such variables. Indeed, the right-hand side of each equation for ⟦T⟧ in Definition 3.3 is expressed in terms of the probability variables corresponding to the children nodes in the tree of T. Since T has finitely many subtrees, we end up with finitely many equations. Then, we observe that every session type T corresponds to a Discrete-Time Markov Chain (DTMC) [33, 48] whose state space is T(T) = {S1, ..., Sn} and such that the probability p_ij of performing a transition from state S_i to state S_j is p if S_i ⇝p S_j, and 0 otherwise. By well-formedness of T, this DTMC is finite-state and absorbing. That is, it is always possible to reach an absorbing state (either ◦ or •) from any transient state (any other session type).
In any finite-state, absorbing DTMC, the probability of reaching a specific absorbing state from any transient state can be computed by solving a particular system of equations which is guaranteed to have a unique solution [33]. Moreover, the system that we obtain for ⟦T⟧ using Definition 3.3 is precisely the one whose solution is the probability of reaching • from T (see Appendix A).

▶ Example 3.4.
We compute the success probability of T from Example 3.2. Let T1 = • p& T2 and T2 = ?int.T3 and T3 = ◦ q⊕ T be convenient names for some of its subtrees. Using Definition 3.3 we obtain the system of equations

⟦T⟧ = ⟦T1⟧    ⟦T1⟧ = p⟦•⟧ + (1 − p)⟦T2⟧    ⟦•⟧ = 1
⟦T2⟧ = ⟦T3⟧    ⟦T3⟧ = q⟦◦⟧ + (1 − q)⟦T⟧    ⟦◦⟧ = 0

from which we compute ⟦T⟧ = p / (1 − (1 − p)(1 − q)) = p / (p + q − pq) (Appendix A details the corresponding DTMC). ∎
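This system can also be solved mechanically. The sketch below computes ⟦T⟧ by fixed-point iteration of the equations of Definition 3.3 over the finite state space T(T), using a hypothetical table encoding of subtrees (an illustration, not the paper's algorithm), and with assumed values p = 0.3 and q = 0.2.

```python
# [[T]] via fixed-point iteration of Definition 3.3 over a finite table of
# subtrees. Encoding (assumed for illustration): ("end", True) is •,
# ("end", False) is ◦, ("in"/"out", t, k) are prefixes, and
# ("branch"/"choice", p, l, r) are L p& R and L p⊕ R.

def success_probability(types, root, sweeps=2000):
    # Start from the leaf values and repeatedly apply the equations; the
    # iteration converges because the DTMC of a well-formed type is absorbing.
    val = {n: 1.0 if types[n] == ("end", True) else 0.0 for n in types}
    for _ in range(sweeps):
        for n, shape in types.items():
            if shape[0] in ("in", "out"):
                val[n] = val[shape[2]]
            elif shape[0] in ("branch", "choice"):
                _, p, left, right = shape
                val[n] = p * val[left] + (1 - p) * val[right]
    return val[root]

p, q = 0.3, 0.2   # assumed illustration values
T = {"T": ("out", "int", "T1"), "T1": ("branch", p, "WIN", "T2"),
     "T2": ("in", "int", "T3"), "T3": ("choice", q, "LOSE", "T"),
     "WIN": ("end", True), "LOSE": ("end", False)}

assert abs(success_probability(T, "T") - p / (p + q - p * q)) < 1e-9
```

The assertion checks the iterative solution against the closed form p/(p + q − pq) derived above.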
Duality.
We write T̄ for the dual of T, that is the session type obtained from T by swapping input actions with output actions and leaving the remaining forms unchanged. Formally, T̄ is the session type obtained from T that satisfies the following equations:

◦̄ = ◦    •̄ = •    (?t.T)‾ = !t.T̄    (!t.T)‾ = ?t.T̄    (T p& S)‾ = T̄ p⊕ S̄    (T p⊕ S)‾ = T̄ p& S̄

It is easy to see that duality is an involution (that is, the dual of T̄ is T) and that the success probability is unaffected by duality, that is ⟦T⟧ = ⟦T̄⟧. This means that we can compute the success probability of a session from either of its two endpoints.
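On a finite table-based representation of session types (a hypothetical encoding used only for illustration), duality is a one-line traversal, and the involution property can be checked directly:

```python
# Duality on a hypothetical finite encoding of session types: nodes are
# ("end", success), ("in"/"out", t, k) or ("branch"/"choice", p, l, r).
# Dualising swaps inputs with outputs and branches with choices; leaves
# and probability annotations are untouched.

SWAP = {"in": "out", "out": "in", "branch": "choice", "choice": "branch"}

def dual(types):
    return {name: ((SWAP[shape[0]],) + shape[1:]) if shape[0] in SWAP else shape
            for name, shape in types.items()}

# Buyer-side auction type; the seller side is its dual.
buyer = {"T": ("out", "int", "T1"), "T1": ("branch", 0.3, "WIN", "T2"),
         "T2": ("in", "int", "T3"), "T3": ("choice", 0.2, "LOSE", "T"),
         "WIN": ("end", True), "LOSE": ("end", False)}

seller = dual(buyer)
assert seller["T"] == ("in", "int", "T1")   # !int becomes ?int
assert seller["T1"][0] == "choice"          # the & becomes a ⊕, same probability
assert dual(seller) == buyer                # duality is an involution
```

Since probability annotations are untouched, ⟦T⟧ = ⟦T̄⟧ holds by construction in this encoding, matching the remark above.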
Types.
Types describe resources used by processes and exchanged as messages. We distinguish session endpoints, whose type is a session type T, from sessions with success probability p, whose type has the form ⟨p⟩:

Type    t, s ::= T | ⟨p⟩    (3.2)

We will see in Section 4 that a type ⟨p⟩ results from “joining” the two peer endpoints of a session having dual session types T and T̄ such that p = ⟦T⟧ = ⟦T̄⟧. For brevity we omit message types such as unit and int from the formal development, as their handling is folklore and does not affect the presented results. We occasionally use them in the examples, though.

A key aspect of the type system is that processes may use session endpoints differently, depending on the outcome of probabilistic choices. Nonetheless, we need to capture the overall effect of such different uses in a single type. For this reason, we introduce a probabilistic type combinator that allows us to combine types by weighing the different ways in which a resource is used according to a given probability.
▶ Definition 3.5 (probabilistic type combination). We write t p⊞ s for the combination of t and s weighed by p, which is defined by cases on the form of t and s as follows:

t p⊞ s :=  t                     if t = s
           T (pq + (1−p)r)⊕ S    if t = T q⊕ S and s = T r⊕ S
           ⟨pq + (1 − p)r⟩       if t = ⟨q⟩ and s = ⟨r⟩
           undefined             otherwise

Intuitively, t p⊞ s describes a resource that is used according to t with probability p and according to s with probability 1 − p. The combination of t and s is only defined when t and s have “compatible shapes”, the trivial case being when they are the same type. The interesting cases are when t and s describe a choice (a point of the protocol where one process performs a selection) and when t and s describe a session as a whole. In both cases, the success probability of the choice (respectively, of the session) is weighed by p. As an example, consider a session endpoint that is used according to T 1⊕ S with probability p and according to T 0⊕ S with probability 1 − p. In the first case, we are certain that the session endpoint is used for selecting “left” and then according to T. In the second case, we are certain that the session endpoint is used for selecting “right” and then according to S. Overall, the session endpoint is used according to the type T p⊕ S.

The combination ⟨q⟩ p⊞ ⟨r⟩ = ⟨pq + (1 − p)r⟩ captures the fact that the success probability of a whole session that is carried out in two different ways, having success probabilities respectively q and r, is the convex sum of q and r weighed by p. The success probability with which we annotate this type allows us to state the soundness properties of the type system, by relating the success probabilities in session types with those of a process that behaves according to those session types. Speaking of success probability, a fundamental property that is used extensively in the soundness proofs is the following one.
Any conceivable generalization of Definition 3.5 must guarantee this property for the type system to be sound.

▶ Proposition 3.6. ⟦T p⊞ S⟧ = p⟦T⟧ + (1 − p)⟦S⟧.

Definition 3.5 is quite conservative in that, except for top-level choices, any other session type can only be combined with itself. It is conceivable to generalize p⊞ to permit the combination of “deep choices” found after a common prefix. For example, we could have !int.(T 1⊕ S) p⊞ !int.(T 0⊕ S) = !int.(T p⊕ S). This generalization is not for free, though. As we will see in Section 4, session endpoints that are affected by a probabilistic choice must be “handled with care” and Definition 3.5 as it stands helps ensuring that this is actually the case. We leave the combination of “deep choices” to future work.

We use contexts for tracking the type of free variables occurring in processes. A context is a finite map from variables to types written x1: t1, ..., xn: tn. We let Γ and ∆ range over contexts, we write ∅ for the empty context, dom(Γ) for the domain of Γ and Γ, ∆ for the union of Γ and ∆ when dom(Γ) ∩ dom(∆) = ∅. We also extend p⊞ pointwise to contexts in the obvious way.

Before we discuss the typing rules, we have to introduce two predicates to single out types that have particular properties. The class of unrestricted types, defined next, is aimed at describing resources that can be discarded and duplicated at will.

▶ Definition 4.1 (unrestricted type and context). We say that t is unrestricted, and we write un(t), if t = ◦. We write un(Γ) if un(Γ(x)) for all x ∈ dom(Γ).

t-idle:    un(Γ) ⟹ Γ ⊢ idle
t-done:    un(Γ) ⟹ Γ, x: • ⊢ done x
t-var:     un(Γ), A: t̃, safe(t̃) ⟹ Γ, x̃: t̃ ⊢ A⟨x̃⟩
t-in:      Γ, x: T, y: t ⊢ P ⟹ Γ, x: ?t.T ⊢ x?(y).P
t-branch:  Γ, x: T ⊢ P and ∆, x: S ⊢ Q ⟹ Γ p⊞ ∆, x: T p& S ⊢ case x [P, Q]
t-out:     Γ, x: T ⊢ P and safe(t) ⟹ Γ, x: !t.T, y: t ⊢ x!y.P
t-left:    Γ, x: T ⊢ P ⟹ Γ, x: T 1⊕ S ⊢ inl x.P
t-right:   Γ, x: S ⊢ P ⟹ Γ, x: T 0⊕ S ⊢ inr x.P
t-par:     Γ, x: T ⊢ P and ∆, x: T̄ ⊢ Q ⟹ Γ, ∆, x: ⟨⟦T⟧⟩ ⊢ P | Q
t-choice:  Γ ⊢ P and ∆ ⊢ Q ⟹ Γ p⊞ ∆ ⊢ P p⊞ Q
t-new:     Γ, x: ⟨p⟩ ⊢ P ⟹ Γ ⊢ (x)P

Table 3: Typing rules.
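The combinator of Definition 3.5, on which rules t-branch and t-choice rely, can be rendered executably. The sketch below uses a hypothetical tuple encoding, ("choice", q, T, S) for T q⊕ S and ("session", q) for ⟨q⟩, and is only a shape-level illustration of the definition:

```python
# t p⊞ s from Definition 3.5: identical types combine to themselves, internal
# choices with the same branches mix their "left" probabilities, and session
# types <q> mix their success probabilities; everything else is undefined.

def combine(t, s, p):
    if t == s:
        return t
    if t[0] == s[0] == "choice" and t[2:] == s[2:]:
        return ("choice", p * t[1] + (1 - p) * s[1], t[2], t[3])
    if t[0] == s[0] == "session":
        return ("session", p * t[1] + (1 - p) * s[1])
    raise TypeError("incompatible shapes")

# Combining a sure "left" selection (T 1⊕ S) with a sure "right" one (T 0⊕ S)
# at weight p yields the internal choice T p⊕ S, as in the example after
# Definition 3.5.
left, right = ("choice", 1.0, "T", "S"), ("choice", 0.0, "T", "S")
assert combine(left, right, 0.7) == ("choice", 0.7, "T", "S")
assert combine(("session", 0.5), ("session", 0.25), 0.5) == ("session", 0.375)
```

The second assertion is an instance of Proposition 3.6: the combined session's success probability is the convex sum of the two components' probabilities.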
In our case, the only unrestricted type is ◦, but if the type system were extended with basic types such as unit and int, these would be unrestricted as well. Next we introduce the class of safe types, those describing resources that can be safely sent in messages and used in process invocations because they cannot be passively affected by a probabilistic choice.

▶ Definition 4.2 (safe type). We write safe(t) if t is not of the form T p& S.

The ultimate motivation for the safety predicate has its roots in the soundness proof of the type system. In a nutshell, an unsafe session type is one whose dual admits a non-trivial probabilistic combination (Definition 3.5) and that may therefore change unpredictably, from the standpoint of a process using a resource with that (unsafe) type. In this case, the process must wait to be notified of the (probabilistic) choice that has occurred before using the resource in a message. Should the need arise to send an unsafe endpoint in a message, it is possible to patch the endpoint's session type so as to make it safe, for example by prefixing the session type with a dummy input/output action. We will see an instance where this patch is necessary in Example 4.12.

Judgments have the form Γ ⊢ P, meaning that P is well typed in Γ, and are derived by the rules in Table 3. We assume a global map from process variables to sequences of types, written {A_i : t̃_i}_{i∈I}, whose domain includes all the process variables for which there is a definition A_i(x̃_i) := P_i and such that x̃_i : t̃_i ⊢ P_i is derivable for every i ∈ I. This ensures that all process definitions are typed consistently. The typing rules are syntax directed, so that each process form corresponds to a typing rule. We now discuss each rule in detail.

Rules t-idle and t-done deal with terminated processes. In t-idle the whole context must be unrestricted, since the idle process does not use any resource.
Rule t-done is similar, except that the session x flagged by the process must have type •. This way, we enforce the correspondence between successful termination in processes and successful termination in session types. Rule t-var establishes that a process invocation is well typed provided that the types of the parameters passed to the process match the expected ones and that any unused resource has an unrestricted type. The premise A : t̃ indicates that A is associated with the sequence of types t̃ in the global map, ensuring that A is invoked with parameters of the right type. Observe that the types of such parameters must be safe. This way, we prevent the use as parameters of resources whose type can be (passively) affected by a probabilistic choice.
Rules t-in and t-out deal with the exchange of a message y on session x. The rules update the type of x from the conclusion to the premise of the rule to account for the communication. As usual, a linear resource y being sent in a message is no longer available in the continuation of the process. As anticipated earlier, t-out requires the type of y to be safe, again to ensure that the type of y does not suddenly change under the effect of a probabilistic choice.

The typing rules described so far are fairly standard for any session calculus. We now move on to the part of the type system that handles probabilities. Rules t-left and t-right deal with selections. In these cases, the type of x must be of the form T p⊕ S and the process continuation uses x according to either T or S, respectively. The key aspect is the probability p with which the process selects “left”, which is 1 in the case of inl x and 0 in the case of inr x. These processes behave deterministically, hence the probability annotation in the session type is trivial. Rule t-branch illustrates the typing of a branch, whereby a process receives a choice from a session x and continues accordingly. The type of x must be of the form T p& S, where p is the probability with which the process will receive a “left” choice. The key part of the rule concerns all the other resources used by the process, which will be used according to Γ if the process receives a “left” choice and according to ∆ otherwise. That is, the process is becoming aware of a probabilistic choice that has been performed elsewhere and whose outcome is communicated on x. Depending on this information, the process uses its resources (not just x) accordingly. The behavior of the process as a whole is described by the combination Γ p⊞ ∆ of the contexts in the two branches.
Recall that the p⊞ operator, when used on session types, is idempotent in all cases except for selections (Definition 3.5). Hence, Γ p⊞ ∆ is nearly the same as Γ and ∆, except that the probabilities with which some future selections will be performed on endpoints in Γ and ∆ may have been adjusted as a side effect of the information received from x. This mechanism enables the propagation of probabilistic choices through the system as messages are exchanged on sessions.

Rule t-par deals with parallel compositions P | Q, where P and Q must use x according to dual session types. Writing Γ, ∆ in the conclusion of the rule ensures that P and Q do not share any name other than x, thus preventing the creation of network topologies that may lead to deadlocks [12]. In the conclusion of the rule the type of x becomes of the form ⟨p⟩ to record the fact that both endpoints of x have been used. The success probability p coincides with that of one of the endpoints and is well defined since ⟦T⟧ = ⟦T̄⟧. Rule t-choice deals with probabilistic choices performed by a process and partially overlaps with t-branch in that the contexts of the two alternative evolutions of the process after the choice are combined by p⊞. Finally, rule t-new removes a session x from the context when x is restricted.

Let us now discuss the main properties enjoyed by well-typed processes. First and foremost, typing is preserved by reductions.

Theorem 4.3 (subject reduction). If Γ ⊢ P and P → Q, then Γ ⊢ Q.

Although this result is considered standard, one detail makes it special in our setting. Specifically, we observe that the reduct Q is well typed in the very same environment used for typing P, despite the fact that a communication may have taken place on a session x in P, determining a change in the session types associated with the endpoints of x.
A communication can occur only if P contains both endpoints of x, and more precisely if there are two subprocesses of P that use x according to dual session types and that are composed in parallel using t-par. Then, x in Γ must be associated with a type of the form ⟨p⟩, where p is the success probability of P. Then, Theorem 4.3 guarantees that not only the typing, but also the success probability of sessions is preserved by reductions. This is counterintuitive at first, given that a session may evolve through different branches each having different success probabilities. However, recall that probabilistic choices are persistent in our calculus, meaning that the reduct Q accounts for all possible evolutions of P. This is what entails such a strong formulation of Theorem 4.3.

Next we turn our attention to termination. To this aim, we provide two characterizations of process termination, respectively concerning the present and the future states of a process.

Definition 4.4 (immediate and eventual termination). We say that P is terminated if P ↓ is derivable using the following axioms and rules:

  idle ↓    done x ↓    P ↓ and Q ↓ imply P | Q ↓    P ↓ and Q ↓ imply P p⊞ Q ↓    P ↓ implies (x)P ↓

We say that P terminates with probability p, notation P ⇓ p, if there exist (P_n), (Q_n) and (p_n) for n ∈ ℕ such that P ⇒ P_n p_n⊞ Q_n and P_n ↓ for every n ∈ ℕ and lim_{n→∞} p_n = p.

In words, P ↓ means that P does not contain any pending communications, whereas P ⇓ p means that P evolves with probability p to states in which there are no pending communications. Our type system is not strong enough to guarantee (probable) termination. For example, the process Ω defined by Ω := Ω is well typed and diverges. In general, however, well-typed processes are guaranteed to be deadlock free, as stated formally below.

Theorem 4.5 (deadlock freedom). If ∅ ⊢ P and P ⇒ Q, then either Q → or Q ↓.
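The derivability of P ↓ is a purely syntactic check, which can be illustrated by the following Python sketch. The tuple encoding of processes is our own, not the paper's: any form other than the five listed (for instance a pending input or output) is treated as not terminated.

```python
# Toy process encoding (our own assumption):
#   ('idle',), ('done', x), ('par', P, Q), ('choice', p, P, Q), ('res', x, P)

def terminated(P):
    """Derivability of the predicate P ↓ from Definition 4.4."""
    tag = P[0]
    if tag in ('idle', 'done'):
        return True                              # axioms: idle ↓ and done x ↓
    if tag == 'par':
        return terminated(P[1]) and terminated(P[2])
    if tag == 'choice':                          # both sides of P p⊞ Q must be ↓
        return terminated(P[2]) and terminated(P[3])
    if tag == 'res':
        return terminated(P[2])
    return False                                 # pending communication
```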
Note that deadlock freedom is not simply a bonus feature of our type system. It is actually a requirement for proving the properties of the type system that specifically pertain to probabilities, which we will discuss shortly. Before doing so, we need an operational characterization of successful termination relative to a particular session.
Definition 4.6 (successful termination of a session). We say that P successfully terminates session x with probability p if P ↑x_p is derivable using the following axioms and rules:

  (p-done) done x ↑x_1
  (p-par-1) P ↑x_p implies P | Q ↑x_p
  (p-par-2) Q ↑x_p implies P | Q ↑x_p
  (p-res) P ↑x_p and x ≠ y imply (y)P ↑x_p
  (p-choice) P ↑x_q and Q ↑x_r imply P p⊞ Q ↑x_{pq+(1−p)r}
  (p-any) P ↑x_0

Axiom p-done states that a process of the form done x has successfully terminated session x with probability 1. The rules p-par-i state that the successful termination of a parallel composition P | Q with respect to a session x can be reduced to the successful termination of either P or Q. In particular, we do not require that both P and Q have successfully terminated x, for two reasons: first, it could be the case that P and Q are connected by a session different from x, hence only one among P and Q could own x; second, if a process has successfully terminated a session through one of its endpoints, then duality ensures that the peer owning the other endpoint cannot have pending operations on it, so the session as a whole can be considered successfully terminated even if only one peer has become done x.

Rule p-res accounts for session restrictions in the expected way and p-choice states that the successful termination of x in a process distribution is obtained by weighing the probabilities of successful termination of the processes in the distribution. Note that p-choice can be applied only if it is possible to derive the successful termination of x for all of the processes in the distribution, whereas in general only some of such processes will have successfully terminated x. To account for this possibility, we can use p-any to approximate the probability of successful termination of x for any process to 0.

The type system gives us an upper bound to the success probability of any session:

Proposition 4.7. If x : ⟨p⟩ ⊢ P and P ↑x_q, then q ≤ p.
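The rules of Definition 4.6 can likewise be read as a recursive computation. The sketch below uses the same toy tuple encoding of processes as before (our own assumption); since p-par-1, p-par-2 and the p-any fallback make the rules non-deterministic, we resolve the choice by returning the largest derivable probability.

```python
# Toy process encoding (our own assumption):
#   ('done', x), ('par', P, Q), ('res', y, P), ('choice', p, P, Q), ...

def success_prob(P, x):
    """Largest probability p such that P ↑x_p is derivable (Definition 4.6)."""
    tag = P[0]
    if tag == 'done' and P[1] == x:              # p-done: done x ↑x_1
        return 1.0
    if tag == 'par':                             # p-par-1 / p-par-2: best of both
        return max(success_prob(P[1], x), success_prob(P[2], x))
    if tag == 'res' and P[1] != x:               # p-res, side condition x ≠ y
        return success_prob(P[2], x)
    if tag == 'choice':                          # p-choice: weighted combination
        p, Q, R = P[1], P[2], P[3]
        return p * success_prob(Q, x) + (1 - p) * success_prob(R, x)
    return 0.0                                   # p-any: approximate with 0
```

For example, a distribution choosing done x with probability 0.3 and an unrelated process otherwise is assigned success probability 0.3 on x, via one application of p-choice with p-done and p-any as premises.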
In particular, a session with type ⟨0⟩ cannot be successfully completed, which could indicate a flaw in the system. The upper bound is matched exactly by terminated processes:

Theorem 4.8. If x : ⟨p⟩ ⊢ P and P ↛, then P ↑x_p.

Note that Theorem 4.8 does not hold unless processes are deadlock free, whence the key role of Theorem 4.5. As stated, Theorem 4.8 appears of limited use since it only concerns processes that cannot reduce any further, whereas in general we are interested in computing the probability of successful termination also for processes engaged in arbitrarily long interactions, for which the predicate P ↛ might never hold. It turns out that Theorem 4.8 can be relativized to the probability that a process terminates, thus:

Corollary 4.9 (relative success). Let P ⇑x_p if there exist (P_n) and (p_n) such that P ⇒ P_n and P_n ↑x_{p_n} for all n ∈ ℕ and lim_{n→∞} p_n = p. Then (1) x : ⟨1⟩ ⊢ P and P ⇓ p imply P ⇑x_p, and (2) x : ⟨p⟩ ⊢ P and P ⇓ 1 imply P ⇑x_p.

Property (1) states that a well-typed process using a session with type ⟨1⟩ successfully completes the session with the same probability with which it terminates. Property (2) extends Theorem 4.8 to processes that are known to terminate with probability 1.

Example 4.10.
Below is the type derivation for the process
Buyer from Example 2.1 using T from Example 3.2 and assuming the type assignment Buyer : T.

  (t-done)   x : • ⊢ done x
  (t-idle)   x : ◦ ⊢ idle
  (t-left)   x : ◦ 1⊕ T ⊢ inl x
  (t-var)    x : T, y : int ⊢ Buyer⟨x⟩
  (t-right)  x : ◦ 0⊕ T, y : int ⊢ inr x.Buyer⟨x⟩
  (t-choice) x : ◦ q⊕ T, y : int ⊢ inl x q⊞ inr x.Buyer⟨x⟩
  (t-in)     x : ?int.(◦ q⊕ T) ⊢ x?(y).(inl x q⊞ inr x.Buyer⟨x⟩)
  (t-branch) x : • p& ?int.(◦ q⊕ T) ⊢ case x [done x, x?(y).(inl x q⊞ inr x.Buyer⟨x⟩)]
  (t-out)    x : T ⊢ x!bid.case x [done x, x?(y).(inl x q⊞ inr x.Buyer⟨x⟩)]

Observe the application of t-choice, which turns the probabilistic choice ◦ q⊕ T in the conclusion of the rule into a deterministic one in the two premises. There exists an analogous derivation for x : T ⊢ Q where Q is the body of Seller⟨x⟩ in Example 2.1. By taking p and q as in Example 3.4, we derive x : ⟨⟦T⟧⟩ ⊢ Buyer⟨x⟩ | Seller⟨x⟩ with one application of t-par. It is easy to establish that this process terminates with probability 1, hence by Corollary 4.9(2) the buyer wins the auction with probability ⟦T⟧.

Example 4.11.
The separation of probabilistic choices from the communication of information ("left" and "right" selections) that depends on such choices implies that there is no 1-to-1 correspondence between choices as seen in session types and choices performed by processes. Below are a few instances in which the type system performs a non-trivial reconciliation between the probability annotations in types and those in processes. The type derivations are detailed in Appendix B.1.

The process case x [inr y.done x, inl y.done y] inverts a choice from session x to y, so that it successfully completes x if and only if it does not successfully complete y. It is well typed in the context x : • p& ◦, y : • 1−p⊕ ◦, which reflects the effect of the inversion.

The process case x [case y [inl z.done z, inr z], case y [inr z, inr z]] coalesces two choices received from x and y into a choice sent on z. The process is well typed in the context x : ◦ p& ◦, y : ◦ q& ◦, z : • pq⊕ ◦, indicating that the success probability for z is the product of the probabilities of receiving "left" from both x and y.

The process inl x.inl x.done x p⊞ inr x.inr x sends the same probabilistic choice twice on session x. It is well typed in the context x : (• 1⊕ ◦) p⊕ (◦ 0⊕ ◦) but not in the context x : (• p⊕ ◦) p⊕ (◦ p⊕ ◦). Once the choice is communicated, subsequent "left" or "right" selections that depend on that choice become deterministic.

Example 4.12 (Work sharing). Consider a system C⟨x⟩ | x?(z).B⟨x, y, z⟩ | A⟨y⟩ modeling (from left to right) a master process C connected with two slave processes which can be "busy" handling jobs or "idle" waiting for jobs. The processes are defined as follows:

  C(x) := x!job.case x [done x, idle]
  B(x, y, job) := y!⟨⟩.( inl x.inl y.done x p⊞ ( inr x.inl y q⊞ inr y.y!x.y!
job.A⟨y⟩ ) )
  A(y) := y?().case y [idle, y?(x).y?(z).B⟨x, y, z⟩]

The master sends a job to the first slave and waits for a notification indicating whether the job has been handled or not. Obviously, the master succeeds only in the first case. A busy slave decides whether to handle the job (with probability p) or not (with probability 1 − p). In the first case, it notifies the master and the idle slave that the job has been handled and terminates. In the second case, it decides whether to discard the job (with probability q) or to hand it over to the other slave (with probability 1 − q). Note that the busy slave sends on y a dummy value to the idle one before taking any decision, so that the type of y is safe when y is used in A⟨y⟩. This way, by the time the busy slave makes a probabilistic choice that may affect (and will be communicated to) the idle slave, the idle slave is blocked on a case waiting for such choice, and therefore its typing can be suitably adjusted when it is moved (by s-par-choice) into the scope of the choice.

Now, take T = !unit.(◦ (p−pq+q)⊕ !S.!int.T) and S = • r⊕ ◦ where max{p, q} > 0 and r = p/(p − pq + q). It is possible to show that the above composition is well typed under the global type assignments C : !int.S, B : S, T, int and A : T, where we assume that job has type int. From the fact that the system terminates with probability 1, we conclude that the master succeeds with probability r. Details can be found in Appendix B.2.

Type systems for probabilistic, concurrent programs.
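The closed form r = p/(p − pq + q) can be cross-checked by summing over hand-over rounds: at each round the current slave handles the job with probability p, discards it with probability (1 − p)q, or passes it on with probability (1 − p)(1 − q), so the master succeeds with probability Σ_n p·((1−p)(1−q))^n. The following Python sketch (our own, with hypothetical function names) computes the partial sums of this geometric series:

```python
def master_success(p, q, rounds=200):
    """Probability that the job is eventually handled, summing the first
    `rounds` terms of p * ((1-p)*(1-q))**n.  The closed form of the full
    series is p / (1 - (1-p)*(1-q)) = p / (p - p*q + q)."""
    pass_on = (1 - p) * (1 - q)          # probability of one more hand-over
    return sum(p * pass_on**n for n in range(rounds))

# e.g. for p = 0.4, q = 0.3 the sum rapidly approaches 0.4 / 0.58
```

Note that 1 − (1−p)(1−q) = p − pq + q, so the partial sums converge to r exactly when max{p, q} > 0, matching the side condition above.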
Despite their close relationship with process algebras, many of which have been extensively studied in a probabilistic setting, there are few results concerning probabilistic variants of session types. A notable exception is [2], which considers a probabilistic variant of multiparty session types (MST) where global types are decorated with ranges of probabilities representing the degree of likelihood for interactions to happen. Besides using MST while we use binary session types, a key difference is that [2] considers neither interleaved sessions nor the effect of probabilistic choices across different sessions. Moreover, the type system presented therein ensures that the aggregate probability of all execution paths is 1, which in our case is guaranteed by the semantics of the probabilistic choice operator in processes.

The type system in [2] essentially checks that each choice in a process is made according to the probability range written in its type, i.e., a process chooses a branch with a probability value that lies within the range specified by its session type. Differently, a probability value in our types does not necessarily translate into the same probability value in a process; moreover, the same probabilistic choice in a process may be reflected as different probabilities in different sessions, as illustrated in Example 4.11.

Some type systems for probabilistic programs have been developed to characterize precisely the space of the possible execution traces [38] or to ensure that well-typed programs do
not leak secret information [17]. The work [53] considers a sub-structural type system for a probabilistic variant of the linear π-calculus. Although that type system is not concerned with probabilities directly, there are interesting analogies with our typing discipline: it is only by relying on the properties of well-typed processes – most notably, race and deadlock freedom – that we are able to relate the probabilities in processes with those in types.

Probabilistic models of concurrent processes.
The design of computational models that combine concurrency and probabilities has a long tradition [54, 49] and gave birth to a variety of operational approaches [50] and concrete probabilistic extensions of well-known concurrency models, such as CCS [27], CSP [40, 25], Petri nets [9], Klaim [19], and name-passing process calculi [28, 53, 42, 26]. Our language for processes can be seen as the session-based counterpart of (a synchronous version of) the simple probabilistic π-calculus [42], which features both probabilistic and non-deterministic choices. While non-deterministic choices in [42] correspond to the standard choice operator (+) of the π-calculus, we adopted a session discipline, and hence a choice is realised by communicating a label over a session.

The development of a denotational semantics for languages that combine non-determinism, concurrency and probabilities has proved challenging. On the one hand, probabilistic choices do not distribute over non-deterministic ones, i.e., it matters whether the environment chooses before or after a probabilistic choice is made, as highlighted in [52]. This observation appears to be reflected in our type system by the typing rules that require a term to be of a safe type, e.g., when a session is delegated. Establishing a precise connection between these two notions may pave the way for generalisations of our probabilistic type combinator. On the other hand, probabilistic choices in a system need to be (probabilistically) independent. This problem is connected with the well-known confusion phenomenon, in which concurrent (and hence, independent) choices may influence each other (e.g., one choice may enable/disable some branch in another choice). As shown in [1, 32, 11], confusion can be avoided by establishing an order in which choices are executed; essentially, by reducing concurrency.
We remark that the session discipline imposed by our language – and rule t-par in particular – makes all probabilistic choices independent (in a probabilistic sense).

Probabilistic languages and analyses.
Probabilistic models are frequently used to prove properties that can be expressed as reachability probabilities; these are then verified by model checking [31]. Our types also express reachability properties, related to the probability of successful completion of a session. Besides, our type system guarantees deadlock freedom. Many approaches have been recently proposed for reasoning on probabilistic programs, e.g., deductive-style approaches based on separation logic [6, 51, 5], probabilistic strategy logic [3], proofs of termination [23, 36], static analysis [55], and probabilistic symbolic execution [10]. Typing has been used in the sequential setting to ensure almost-sure termination in a probabilistic lambda calculus [34]. Our type system does not ensure termination, but it could form the basis for a probabilistic termination analysis.
Deadlock-free sessions.
The technique we use for preventing deadlocks, which only addresses tree-like network topologies, is directly inspired by logic-based session type systems [12]. However, our probabilistic analysis is independent of the exact mechanism that enforces deadlock freedom and applies to other type systems relying on richer type structures [45, 18].
In this work we start the study of a type-based static analysis technique for reasoning on probabilistic reachability problems in session-based systems. We relate a probabilistic variant of a session-based calculus (Section 2) with a probabilistic variant of binary session types (Section 3) and establish a correspondence between probability annotations in processes and those in types (Section 4). By breaking down a complex system of communicating processes into sessions, we are able to modularly infer properties concerning the (probable) evolution of the system from the much simpler specifications described by session types.

There are many developments that stem from this work, addressing both technical and practical problems. Here we discuss those that look most promising or intriguing.

To make our approach practical, the type system must be supported by suitable type checking and inference algorithms. Indeed, even though the typing rules are syntax directed, the probabilistic type combinator (Definition 3.5) is difficult to deal with because it is not injective (the same type can result from combining types with different probability annotations). We are also considering extensions of the very same operator so that it is applicable to "deep choices" that do not necessarily occur at the top level of a session type. This extension requires a careful balancing with the notion of type safety (Definition 4.2).

Subtyping relations for session types [24] are important for addressing realistic programming scenarios. Given the already established connections between session subtyping and (fair) testing relations [35, 13, 44, 8, 46] and the extensive literature on probabilistic testing relations [14, 43, 21, 20] and behavioral equivalences [39], the investigation of probabilistic variants of session subtyping has solid grounds to build upon. A related problem is that process models that feature both non-deterministic and probabilistic choices are known to be difficult to model and analyze [21].
It could be the case that session-based systems with both non-deterministic and probabilistic choices are easier to address thanks to their simpler structure, as already observed in [53].

Our analysis based on probabilistic session types can be extended in several ways. For example, it would be interesting to quantify the probability of (partial) execution traces rather than (or in addition to) the reachability of "successful states".

As remarked in Section 3, reachability ensures uniqueness of solutions of the systems of equations induced by Definition 3.3, but it could be interesting to analyse the spectrum of solutions obtained when reachability is dropped. One could also study variants where probabilities are allowed to vary during the execution. For instance, one might want to analyse recursive protocols where probabilities decrease (or increase) at each iteration. In our setting this may spoil regularity (subtrees may be decorated with infinitely many probabilities), allowing one to give non-finitary specifications. A possible way of tackling this problem is to allow imprecise probabilities in the types; this may retain regularity at the cost of a coarser static analysis. Probability ranges could also be useful in those cases where probability annotations in processes are uncertain, possibly because they have been estimated from execution traces [22]. Besides probabilities, there might be other methods suitable to model the uncertainty behind the behavior of processes. Further approaches include the "possibilistic" one, where uncertainty is described using linguistic categories with fuzzy boundaries [56], information-gap decision theory, where the impact of uncertain parameters is estimated by the deviation of errors [7], and interval analysis, where uncertain parameters are modelled as intervals and worst-case analysis is usually performed [41].
We think that probability annotations in session types may also support forms of static analysis aimed at quantifying the termination probability of session-based programs. Known type
systems that ensure progress, deadlock and livelock freedom are often quite constraining on the structure of well-typed programs [45, 15, 4]. It could be the case that switching to a probabilistic setting broadens substantially the range of addressable programs.
References

[1] Samy Abbes and Albert Benveniste. True-concurrency probabilistic models: Branching cells and distributed probabilities for event structures. Information and Computation, 204(2):231–274, 2006. doi:10.1016/j.ic.2005.10.001.
[2] Bogdan Aman and Gabriel Ciobanu. Probabilities in session types. In Mircea Marin and Adrian Craciun, editors, Proceedings Third Symposium on Working Formal Methods, FROM 2019, Timişoara, Romania, 3-5 September 2019, volume 303 of EPTCS, pages 92–106, 2019. doi:10.4204/EPTCS.303.7.
[3] Benjamin Aminof, Marta Kwiatkowska, Bastien Maubert, Aniello Murano, and Sasha Rubin. Probabilistic strategy logic. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 32–38, 2019. doi:10.24963/ijcai.2019/5.
[4] Stephanie Balzer, Bernardo Toninho, and Frank Pfenning. Manifest deadlock-freedom for shared session types. In Proceedings of the European Symposium on Programming Languages (ESOP), volume 11423, pages 611–639. Springer, 2019. doi:10.1007/978-3-030-17184-1_22.
[5] Gilles Barthe, Justin Hsu, and Kevin Liao. A probabilistic separation logic. Proc. ACM Program. Lang., 4(POPL):55:1–55:30, 2020. doi:10.1145/3371123.
[6] Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, and Thomas Noll. Quantitative separation logic: a logic for reasoning about probabilistic pointer programs. Proc. ACM Program. Lang., 3(POPL):34:1–34:29, 2019. doi:10.1145/3290347.
[7] Yakov Ben-Haim. Info-gap decision theory: decisions under severe uncertainty. Academic Press, 2006.
[8] Giovanni Bernardi and Matthew Hennessy. Using higher-order contracts to model session types. Logical Methods in Computer Science, 12(2), 2016. doi:10.2168/LMCS-12(2:10)2016.
[9] Rémi Bonnet, Stefan Kiefer, and Anthony Widjaja Lin. Analysis of probabilistic basic parallel processes. In Proceedings of the International Conference on Foundations of Software Science and Computation Structures (FoSSaCS), volume 8412, pages 43–57. Springer, 2014. doi:10.1007/978-3-642-54830-7_3.
[10] Mateus Borges, Antonio Filieri, Marcelo d'Amorim, and Corina S. Pasareanu. Iterative distribution-aware sampling for probabilistic symbolic execution. In Proceedings of the Joint Meeting on Foundations of Software Engineering (ESEC/FSE), pages 866–877, 2015. doi:10.1145/2786805.2786832.
[11] Roberto Bruni, Hernán C. Melgratti, and Ugo Montanari. Concurrency and probability: Removing confusion, compositionally. Log. Methods Comput. Sci., 15(4), 2019. doi:10.23638/LMCS-15(4:17)2019.
[12] Luís Caires, Frank Pfenning, and Bernardo Toninho. Linear logic propositions as session types. Math. Struct. Comput. Sci., 26(3):367–423, 2016. doi:10.1017/S0960129514000218.
[13] Giuseppe Castagna, Mariangiola Dezani-Ciancaglini, Elena Giachino, and Luca Padovani. Foundations of session types. In António Porto and Francisco Javier López-Fraguas, editors, Proceedings of the International Conference on Principles and Practice of Declarative Programming (PPDP), pages 219–230. ACM, 2009. doi:10.1145/1599410.1599437.
[14] Rance Cleaveland, Zeynep Dayar, Scott A. Smolka, and Shoji Yuen. Testing preorders for probabilistic processes. Inf. Comput., 154(2):93–148, 1999. doi:10.1006/inco.1999.2808.
[15] Mario Coppo, Mariangiola Dezani-Ciancaglini, Nobuko Yoshida, and Luca Padovani. Global progress for dynamically interleaved multiparty sessions. Math. Struct. Comput. Sci., 26(2):238–302, 2016. doi:10.1017/S0960129514000188.
[16] Bruno Courcelle. Fundamental properties of infinite trees. Theor. Comput. Sci., 25:95–169, 1983. doi:10.1016/0304-3975(83)90059-2.
[17] David Darais, Ian Sweet, Chang Liu, and Michael Hicks. A language for probabilistically oblivious computation. Proc. ACM Program. Lang., 4(POPL):50:1–50:31, 2020. doi:10.1145/3371118.
[18] Ornela Dardha and Simon J. Gay. A new linear logic for deadlock-free session-typed processes. In Christel Baier and Ugo Dal Lago, editors, Foundations of Software Science and Computation Structures - 21st International Conference, FOSSACS 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings, volume 10803 of Lecture Notes in Computer Science, pages 91–109. Springer, 2018. doi:10.1007/978-3-319-89366-2_5.
[19] Rocco De Nicola, Diego Latella, and Mieke Massink. Formal modeling and quantitative analysis of Klaim-based mobile systems. In Proceedings of the ACM Symposium on Applied Computing (SAC), pages 428–435, 2005. doi:10.1145/1066677.1066777.
[20] Yuxin Deng, Rob van Glabbeek, Matthew Hennessy, and Carroll Morgan. Testing finitary probabilistic processes. In Proceedings of the International Conference on Concurrency Theory (CONCUR), pages 274–288. Springer, 2009. doi:10.1007/978-3-642-04081-8_19.
[21] Yuxin Deng, Rob J. van Glabbeek, Matthew Hennessy, Carroll Morgan, and Chenyi Zhang. Remarks on testing probabilistic processes. Electron. Notes Theor. Comput. Sci., 172:359–397, 2007. doi:10.1016/j.entcs.2007.02.013.
[22] Seyedeh Sepideh Emam and James Miller. Inferring extended probabilistic finite-state automaton models from software executions. ACM Trans. Softw. Eng. Methodol., 27(1):4:1–4:39, 2018. doi:10.1145/3196883.
[23] Luis María Ferrer Fioriti and Holger Hermanns. Probabilistic termination: Soundness, completeness, and compositionality. In Proceedings of the ACM SIGPLAN Symposium on Principles of Programming Languages (POPL), pages 489–501. ACM, 2015. doi:10.1145/2775051.2677001.
[24] Simon J. Gay and Malcolm Hole. Subtyping for session types in the pi calculus. Acta Inf., 42(2-3):191–225, 2005. doi:10.1007/s00236-005-0177-z.
[25] Sonja Georgievska and Suzana Andova. Probabilistic CSP: preserving the laws via restricted schedulers. In Proceedings of the International GI/ITG Conference on Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance (MMB/DFT), volume 7201, pages 136–150. Springer, 2012. doi:10.1007/978-3-642-28540-0_10.
[26] Jean Goubault-Larrecq, Catuscia Palamidessi, and Angelo Troina. A probabilistic applied pi-calculus. In Zhong Shao, editor, Programming Languages and Systems, 5th Asian Symposium, APLAS 2007, Singapore, November 29-December 1, 2007, Proceedings, volume 4807 of Lecture Notes in Computer Science, pages 175–190. Springer, 2007. doi:10.1007/978-3-540-76637-7_12.
[27] Hans A. Hansson. Time and probabilities in specification and verification of real-time systems. In Proceedings of the Euromicro Workshop on Real-Time Systems (RTS), pages 92–97, 1992. doi:10.1109/EMWRT.1992.637477.
[28] Oltea Mihaela Herescu and Catuscia Palamidessi. Probabilistic asynchronous π-calculus. In Proceedings of the International Conference on Foundations of Software Science and Computation Structures (FoSSaCS), volume 1784, pages 146–160. Springer, 2000. doi:10.1007/3-540-46432-8_10.
[29] Kohei Honda. Types for dyadic interaction. In Eike Best, editor, Proceedings of the International Conference on Concurrency Theory (CONCUR), volume 715, pages 509–523. Springer, 1993. doi:10.1007/3-540-57208-2_35.
[30] Hans Hüttel, Ivan Lanese, Vasco T. Vasconcelos, Luís Caires, Marco Carbone, Pierre-Malo Deniélou, Dimitris Mostrous, Luca Padovani, António Ravara, Emilio Tuosto, Hugo Torres Vieira, and Gianluigi Zavattaro. Foundations of session types and behavioural contracts. ACM Comput. Surv., 49(1):3:1–3:36, 2016. doi:10.1145/2873052.
[31] Joost-Pieter Katoen. The probabilistic model checking landscape. In Martin Grohe, Eric Koskinen, and Natarajan Shankar, editors, Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '16, New York, NY, USA, July 5-8, 2016, pages 31–45. ACM, 2016. doi:10.1145/2933575.2934574.
[32] Joost-Pieter Katoen and Doron A. Peled. Taming confusion for modeling and implementing probabilistic concurrent systems. In Matthias Felleisen and Philippa Gardner, editors, Programming Languages and Systems - 22nd European Symposium on Programming, ESOP 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013, Proceedings, volume 7792 of Lecture Notes in Computer Science, pages 411–430. Springer, 2013. doi:10.1007/978-3-642-37036-6_23.
[33] John G. Kemeny and J. Laurie Snell. Finite Markov Chains. Springer-Verlag, 1976.
[34] Ugo Dal Lago and Charles Grellois. Probabilistic termination by monadic affine sized typing. ACM Trans. Program. Lang. Syst., 41(2):10:1–10:65, 2019. doi:10.1145/3293605.
[35] Cosimo Laneve and Luca Padovani. The pairing of contracts and session types. In Concurrency, Graphs and Models, Essays Dedicated to Ugo Montanari on the Occasion of His 65th Birthday, volume 5065, pages 681–700. Springer, 2008. doi:10.1007/978-3-540-68679-8_42.
[36] Ondrej Lengál, Anthony Widjaja Lin, Rupak Majumdar, and Philipp Rümmer. Fair termination for parameterized probabilistic concurrent systems. In Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 10205, pages 499–517, 2017. doi:10.1007/978-3-662-54577-5_29.
[37] Thomas Leventis. A deterministic rewrite system for the probabilistic λ-calculus. Math. Struct. Comput. Sci., 29(10):1479–1512, 2019. doi:10.1017/S0960129519000045.
[38] Alexander K. Lew, Marco F. Cusumano-Towner, Benjamin Sherman, Michael Carbin, and Vikash K. Mansinghka. Trace types and denotational semantics for sound programmable inference in probabilistic languages. Proc. ACM Program. Lang., 4(POPL):19:1–19:32, 2020. doi:10.1145/3371087.
[39] Natalia López and Manuel Núñez. An overview of probabilistic process algebras and their equivalences. In Validation of Stochastic Systems - A Guide to Current Research, volume 2925, pages 89–123. Springer, 2004. doi:10.1007/978-3-540-24611-4_3.
[40] Gavin Lowe. Probabilistic and prioritized models of timed CSP. Theor. Comput. Sci., 138(2):315–352, 1995. doi:10.1016/0304-3975(94)00171-E.
[41] Ramon E. Moore, R. Baker Kearfott, and Michael J. Cloud. Introduction to Interval Analysis. SIAM, 2009. doi:10.1137/1.9780898717716.
[42] Gethin Norman, Catuscia Palamidessi, David Parker, and Peng Wu. Model checking the probabilistic pi-calculus. In Fourth International Conference on the Quantitative Evaluation of Systems (QEST 2007), 17-19 September 2007, Edinburgh, Scotland, UK, pages 169–178. IEEE Computer Society, 2007. doi:10.1109/QEST.2007.31.
[43] Manuel Núñez and David Rupérez. Fair testing through probabilistic testing. In Proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification (PSTV), volume 156, pages 135–150. Kluwer, 1999. doi:10.1007/978-0-387-35578-8_8.
[44] Luca Padovani. Fair subtyping for open session types. In Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), volume 7966, pages 373–384. Springer, 2013. doi:10.1007/978-3-642-39212-2_34.
[45] Luca Padovani. Deadlock and lock freedom in the linear π-calculus. In Proceedings of the Joint Meeting of the EACSL Annual Conference on Computer Science Logic and the Annual ACM/IEEE Symposium on Logic in Computer Science (CSL-LICS), pages 72:1–72:10. ACM, 2014. doi:10.1145/2603088.2603116.
[46] Luca Padovani. Fair subtyping for multi-party session types. Math. Struct. Comput. Sci., 26(3):424–464, 2016. doi:10.1017/S096012951400022X.
[47] Benjamin C. Pierce. Types and programming languages. MIT Press, 2002.
[48] Jan J. M. M. Rutten, Marta Z. Kwiatkowska, Gethin Norman, David Parker, and Prakash Panangaden. Mathematical techniques for analyzing concurrent and probabilistic systems, volume 23 of CRM Monograph Series. American Mathematical Society, 2004.
[49] Roberto Segala and Nancy Lynch. Probabilistic simulations for probabilistic processes. Nordic Journal of Computing, 2(2):250–273, 1995.
[50] Ana Sokolova and Erik P. de Vink. Probabilistic automata: System types, parallel composition and comparison. In Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, Joost-Pieter Katoen, and Markus Siegle, editors, Validation of Stochastic Systems - A Guide to Current Research, volume 2925 of
Lecture Notes in Computer Science , pages 1–43. Springer, 2004. doi:10.1007/978-3-540-24611-4_1 . Joseph Tassarotti and Robert Harper. A separation logic for concurrent randomized programs.
Proc. ACM Program. Lang. , 3(POPL):64:1–64:30, 2019. doi:10.1145/3290377 . Daniele Varacca and Glynn Winskel. Distributing probability over non-determinism.
Math.Struct. Comput. Sci. , 16(1):87–113, 2006. doi:10.1017/S0960129505005074 . Daniele Varacca and Nobuko Yoshida. Probabilistic π -calculus and event structures. ElectronicNotes in Theoretical Computer Science , 190(3):147–166, 2007. doi:10.1016/j.entcs.2007.07.009 . Moshe Y. Vardi. Automatic verification of probabilistic concurrent finite-state programs. In , pages 327–338. IEEE Computer Society, 1985. doi:10.1109/SFCS.1985.12 . Di Wang, Jan Hoffmann, and Thomas W. Reps. PMAF: an algebraic framework for staticanalysis of probabilistic programs. In
Proceedings of the ACM SIGPLAN Conference onProgramming Language Design and Implementation (PLDI) , pages 513–528, 2018. doi:10.1145/3296979.3192408 . Lotfi A. Zadeh. Fuzzy sets.
Inf. Control. , 8(3):338–353, 1965. doi:10.1016/S0019-9958(65)90241-X . A Supplement to Section 3 (cid:73)
Example A.1. Consider the type T in Example 3.4. The transition matrix P = [p_ij] of its associated DTMC is shown below:

          S₁    S₂    S₃    S₄    S₅    S₆
   S₁  |   1     0     0     0     0     0
   S₂  |   0     1     0     0     0     0
   S₃  |   0     0     0     1     0     0
   S₄  |  1/4    0     0     0    3/4    0
   S₅  |   0     0     0     0     0     1
   S₆  |   0    2/3   1/3    0     0     0

where

   S₁ = •
   S₂ = ◦
   S₃ = T
   S₄ = • &_{1/4} ?int.(◦ ⊕_{2/3} T)
   S₅ = ?int.(◦ ⊕_{2/3} T)
   S₆ = ◦ ⊕_{2/3} T

Note that we have given P in its canonical form [33], in which we have partitioned P into four submatrices with the names and meaning described below in clockwise order, starting from the top-left corner of P:

   S is the 2-by-2 identity matrix giving the probability transitions among the absorbing states. By definition of absorbing state, this is an identity matrix.
   O is the 2-by-4 matrix giving the probability transitions from the absorbing states to the transient states. By definition, these probabilities are all zeros.
   Q is the 4-by-4 matrix giving the probability transitions among the transient states.
   R is the 4-by-2 matrix giving the probability transitions from the transient states to the absorbing states.
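This canonical-form computation can be checked mechanically. The following Python sketch is illustrative only and is not part of the paper's development: it encodes the transient submatrix Q and the •-column of R for this DTMC, assuming the transition probabilities 1/4, 3/4, 2/3 and 1/3 of this example, and solves (I − Q)x = R[:, •] with exact rational arithmetic.

```python
from fractions import Fraction as F

def solve(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pivot = M[col][col]
        M[col] = [x / pivot for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Transient part Q (states S3 = T, S4, S5, S6) and the column of R towards
# the absorbing state •, with the probabilities assumed for this example.
Q = [[F(0),    F(1), F(0),    F(0)],
     [F(0),    F(0), F(3, 4), F(0)],
     [F(0),    F(0), F(0),    F(1)],
     [F(1, 3), F(0), F(0),    F(0)]]
R_bullet = [F(0), F(1, 4), F(0), F(0)]

# Absorption by •: the corresponding column of B = (I - Q)^{-1} R is the
# solution of the linear system (I - Q) x = R[:, •].
I_minus_Q = [[(F(1) if i == j else F(0)) - Q[i][j] for j in range(4)]
             for i in range(4)]
b = solve(I_minus_Q, R_bullet)
print(b)  # b[0] is the absorption probability of S3 = T by •, i.e. [[T]]
```

The solution vector lists the absorption probabilities by • of the transient states S₃, S₄, S₅, S₆; its first component is ⟦T⟧.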
Now, the probability of S₃ being absorbed by S₁, i.e., ⟦T⟧, can be obtained from the matrix B = [b_ij], which is computed as follows:

  B = (I − Q)⁻¹ R

    = |   1    −1     0     0 |⁻¹  |   0     0  |
      |   0     1   −3/4    0 |    |  1/4    0  |
      |   0     0     1    −1 |    |   0     0  |
      | −1/3    0     0     1 |    |   0    2/3 |

    = |  4/3   4/3    1     1  |   |   0     0  |
      |  1/3   4/3    1     1  |   |  1/4    0  |
      |  4/9   4/9   4/3   4/3 |   |   0     0  |
      |  4/9   4/9   1/3   4/3 |   |   0    2/3 |

    = |  1/3   2/3 |
      |  1/3   2/3 |
      |  1/9   8/9 |
      |  1/9   8/9 |

Then, the probability of absorption for S₃ = T is b₃₁. Hence, ⟦T⟧ = 1/3. ◀

Theorem A.2 ([33]). Let P be the transition matrix of an absorbing DTMC and B* be the matrix of the absorption probabilities. Then, P B* = B*.

Note that the column l of B*, i.e., [b_il], contains the probabilities of s_i being absorbed by s_l. Consequently, b_ll = 1 and b_il = 0 for all absorbing states s_i ≠ s_l. Also, the probability b_il for non-absorbing states s_i can be obtained by solving the system of linear equations corresponding to the l-column of B* in the equality B* = P B*, i.e.,

  b_ll = 1
  b_il = 0                      for all absorbing states s_i ≠ s_l
  b_il = Σ_h p_ih · b_hl        for all non-absorbing states s_i

When considering the DTMCs associated with session types there are exactly two absorbing states, namely • and ◦. Moreover, we are interested in computing the column in B* associated with •. If we write ⟦S_i⟧ in place of b_il when S_l = •, then the set of linear equations is

  ⟦•⟧ = 1
  ⟦◦⟧ = 0
  ⟦S_i⟧ = Σ_h p_ih · ⟦S_h⟧      for all S_i ∉ {◦, •}

Example A.3. The system of equations for the DTMC in Example A.1 is

  ⟦•⟧ = 1
  ⟦◦⟧ = 0
  ⟦T⟧ = ⟦S₄⟧
  ⟦S₄⟧ = 1/4 ⟦•⟧ + 3/4 ⟦S₅⟧
  ⟦S₅⟧ = ⟦S₆⟧
  ⟦S₆⟧ = 2/3 ⟦◦⟧ + 1/3 ⟦T⟧

Note in particular that the system of equations corresponds exactly to the one derived from Definition 3.3 and its solution is ⟦T⟧ = 1/3, ⟦S₄⟧ = 1/3, ⟦S₅⟧ = 1/9, ⟦S₆⟧ = 1/9. ◀

We conclude this section with the proof of Proposition 3.6.
Proposition 3.6. ⟦T₁ +_p T₂⟧ = p⟦T₁⟧ + (1 − p)⟦T₂⟧.

Proof. The only interesting case is when T₁ = T ⊕_q S and T₂ = T ⊕_r S. We have

  ⟦T₁ +_p T₂⟧
    = ⟦(T ⊕_q S) +_p (T ⊕_r S)⟧                           by definition of T₁ and T₂
    = ⟦T ⊕_{pq+(1−p)r} S⟧                                 by definition of +_p
    = (pq + (1−p)r)⟦T⟧ + (1 − pq − (1−p)r)⟦S⟧             by definition of ⟦·⟧
    = pq⟦T⟧ + r⟦T⟧ − pr⟦T⟧ + ⟦S⟧ − pq⟦S⟧ − r⟦S⟧ + pr⟦S⟧

and

  p⟦T₁⟧ + (1−p)⟦T₂⟧
    = p⟦T ⊕_q S⟧ + (1−p)⟦T ⊕_r S⟧                         by definition of T₁ and T₂
    = p(q⟦T⟧ + (1−q)⟦S⟧) + (1−p)(r⟦T⟧ + (1−r)⟦S⟧)         by definition of ⟦·⟧
    = pq⟦T⟧ + p⟦S⟧ − pq⟦S⟧ + r⟦T⟧ + ⟦S⟧ − r⟦S⟧ − pr⟦T⟧ − p⟦S⟧ + pr⟦S⟧
    = pq⟦T⟧ + r⟦T⟧ − pr⟦T⟧ + ⟦S⟧ − pq⟦S⟧ − r⟦S⟧ + pr⟦S⟧

which confirms the statement. ◀

B  Examples

B.1  Typing of Example 4.11

The derivation below shows that case x [inr y.done x, inl y.done y] is well typed in the context x : • &_p ◦, y : • ⊕_{1−p} ◦.

  x : •, y : ◦ ⊢ done x                                                  (t-done)
  x : •, y : • ⊕ ◦ ⊢ inr y.done x                                        (t-right)
  x : ◦, y : • ⊢ done y                                                  (t-done)
  x : ◦, y : • ⊕ ◦ ⊢ inl y.done y                                        (t-left)
  x : • &_p ◦, y : • ⊕_{1−p} ◦ ⊢ case x [inr y.done x, inl y.done y]     (t-branch)

The following derivation shows that case x [case y [inl z.done z, inr z], case y [inr z, inr z]] is well typed in the context x : ◦ &_p ◦, y : ◦ &_q ◦, z : • ⊕_{pq} ◦.

  x : ◦, y : ◦, z : • ⊢ done z                                           (t-done)
  x : ◦, y : ◦, z : • ⊕ ◦ ⊢ inl z.done z                                 (t-left)
  x : ◦, y : ◦, z : ◦ ⊢ idle                                             (t-idle)
  x : ◦, y : ◦, z : • ⊕ ◦ ⊢ inr z                                        (t-right)
  x : ◦, y : ◦ &_q ◦, z : • ⊕_q ◦ ⊢ case y [inl z.done z, inr z]         (t-branch)
  ...
  x : ◦ &_p ◦, y : ◦ &_q ◦, z : • ⊕_{pq} ◦ ⊢ case x [case y [inl z.done z, inr z], case y [inr z, inr z]]    (t-branch)

We illustrate below that inl x.inl x.done x + inr x.inr x cannot be typed with the context x : (• ⊕ ◦) ⊕ (◦ ⊕ ◦).

  x : • ⊕ ◦ ⊢ inl x.done x                                               (A)
  x : (• ⊕ ◦) ⊕ ◦ ⊢ inl x.inl x.done x                                   (t-left)
  x : ◦ ⊢ idle
  x : (• ⊕ ◦) ⊕ ◦ ⊢ inr x                                                (t-right)
  x : (• ⊕ ◦) ⊕ ◦ ⊢ (inl x.inl x.done x) + inr x                         (t-choice)

B.2  Typing of Example 4.12
We first show that the defining equation for the process variable C is well typed, i.e., that the judgement x : !int.(• &_r ◦) ⊢ x!job.case x [done x, idle] holds (when assuming job is of type int).

  x : • ⊢ done x                                            (t-done)
  x : ◦ ⊢ idle                                              (t-idle)
  x : • &_r ◦ ⊢ case x [done x, idle]                       (t-branch)
  x : !int.(• &_r ◦) ⊢ x!job.case x [done x, idle]          (t-out, using safe(int))

We now consider the defining equation for the process variable B. For presentation purposes we consider first the derivations for three different subterms corresponding to the alternative choices in the definition. In particular,

  x : • ⊕ ◦, y : ◦ ⊕ !S.!int.T, job : int ⊢ inl x.inl y.done x      (B.1)
  x : • ⊕ ◦, y : ◦ ⊕ !S.!int.T, job : int ⊢ inr x.inl y             (B.2)
  x : S, y : ◦ ⊕ !S.!int.T, job : int ⊢ inr y.y!x.y!job.A⟨y⟩        (B.3)

The derivation of (B.1) is

  x : •, y : ◦, job : int ⊢ done x                                   (t-done)
  x : •, y : ◦ ⊕ !S.!int.T, job : int ⊢ inl y.done x                 (t-left)
  x : • ⊕ ◦, y : ◦ ⊕ !S.!int.T, job : int ⊢ inl x.inl y.done x      (t-left)

the derivation of (B.2) is

  x : ◦, y : ◦, job : int ⊢ idle                                     (t-idle)
  x : ◦, y : ◦ ⊕ !S.!int.T, job : int ⊢ inl y                        (t-left)
  x : • ⊕ ◦, y : ◦ ⊕ !S.!int.T, job : int ⊢ inr x.inl y             (t-right)

and the derivation of (B.3) is

  y : T ⊢ A⟨y⟩                                       (t-var, using A : T and safe(T))
  y : !int.T, job : int ⊢ y!job.A⟨y⟩                 (t-out, using safe(int))
  x : S, y : !S.!int.T, job : int ⊢ y!x.y!job.A⟨y⟩   (t-out, using safe(S))
  x : S, y : ◦ ⊕ !S.!int.T, job : int ⊢ inr y.y!x.y!job.A⟨y⟩        (t-right)

Then, the derivation for the right-most probabilistic choice in the definition of B is obtained from (B.2) and (B.3) as follows.

  x : • ⊕_{(1−q)r} ◦, y : ◦ ⊕_q !S.!int.T, job : int ⊢ inr x.inl y +_q inr y.y!x.y!job.A⟨y⟩    (t-choice)   (B.4)

The derivation for the definition of B is obtained from (B.1) and (B.4) as follows.

  x : • ⊕_{p+(1−p)(1−q)r} ◦, y : ◦ ⊕_{p+(1−p)q} !S.!int.T, job : int ⊢ ... +_p ...                    (t-choice)
  x : • ⊕_{p+(1−p)(1−q)r} ◦, y : !unit.(◦ ⊕_{p+(1−p)q} !S.!int.T), job : int ⊢ y!⟨⟩.(... +_p ...)     (t-out)   (B.5)

The proof is completed by noting that

  p + (1−p)(1−q) · p/(p − pq + q) = p/(p − pq + q) = r    and    p + (1−p)q = p − pq + q.

We show that the definition of A is well typed with the derivation below.

  y : ◦ ⊢ idle                                                          (t-idle)
  x : S, y : T, z : int ⊢ B⟨x, y, z⟩             (t-var, using B : S, T, int and safe(S, T, int))
  x : S, y : ?int.T ⊢ y?(z).B⟨x, y, z⟩                                  (t-in)
  y : ?S.?int.T ⊢ y?(x).y?(z).B⟨x, y, z⟩                                (t-in)
  y : ◦ &_{p−pq+q} ?S.?int.T ⊢ case y [idle, y?(x).y?(z).B⟨x, y, z⟩]    (t-branch)
  y : ?unit.(◦ &_{p−pq+q} ?S.?int.T) ⊢ y?().case y [idle, y?(x).y?(z).B⟨x, y, z⟩]    (t-in)

The typing for the composition C⟨x⟩ | x?(z).B⟨x, y, z⟩ | A⟨y⟩ is obtained as follows.

  x : !int.S ⊢ C⟨x⟩                               (t-var, using C : !int.S and safe(!int.S))
  x : S, y : T, z : int ⊢ B⟨x, y, z⟩              (t-var, using B : S, T, int and safe(S, T, int))
  x : ?int.S, y : T ⊢ x?(z).B⟨x, y, z⟩            (t-in)
  y : T ⊢ A⟨y⟩                                    (t-var, using A : T and safe(T))
  x : ?int.S, y : ⟨⟦T⟧⟩ ⊢ x?(z).B⟨x, y, z⟩ | A⟨y⟩                       (t-par)
  x : ⟨⟦?int.S⟧⟩, y : ⟨⟦T⟧⟩ ⊢ C⟨x⟩ | x?(z).B⟨x, y, z⟩ | A⟨y⟩            (t-par)

Finally, we compute the success probabilities: ⟦?int.S⟧ = ⟦S⟧ = r⟦•⟧ + (1−r)⟦◦⟧ = r, and ⟦T⟧ = 0 since T cannot reach •. The complete computation is as follows, where r̄ = 1 − (p − pq + q):

  ⟦T⟧ = ⟦◦ ⊕_{p−pq+q} !S.!int.T⟧
      = (p − pq + q)⟦◦⟧ + r̄ ⟦!S.!int.T⟧
      = r̄ ⟦!S.!int.T⟧                        by ⟦◦⟧ = 0
      = r̄ ⟦!int.T⟧
      = r̄ ⟦T⟧

  ⟦T⟧ = ⟦◦ &_{p−pq+q} ?S.?int.T⟧
      = (p − pq + q)⟦◦⟧ + r̄ ⟦?S.?int.T⟧
      = r̄ ⟦?S.?int.T⟧                        by ⟦◦⟧ = 0
      = r̄ ⟦?int.T⟧
      = r̄ ⟦T⟧

whose unique solution is ⟦T⟧ = 0 (for 0 < p, q < 1).

C  Proof of Theorem 4.3
Lemma C.1. If t +_1 s is defined, then t +_1 s = t.

Proof. The only interesting case is when t ≠ s, and this can happen in two cases only. If t = ⟨p⟩ and s = ⟨q⟩, then we conclude t +_1 s = ⟨p⟩. If t = T ⊕_p S and s = T ⊕_q S, then we conclude t +_1 s = T ⊕_p S. ◀

The next result shows that, if the very same process can be typed in two different contexts, then the success probabilities of the session types in the two contexts are the same. In general it is not true that the session types themselves are the same, because t-left and t-right allow selections to be typed differently as far as the non-selected branch is concerned. Let ≃ be the smallest equivalence relation on types such that T ≃ S if ⟦T⟧ = ⟦S⟧. We write Γ ≃ ∆ if Γ(x) ≃ ∆(x) for every x ∈ dom(Γ) ∩ dom(∆).

Lemma C.2. If Γᵢ ⊢ P for i = 1, 2 and dom(Γ₁) = dom(Γ₂), then Γ₁ ≃ Γ₂.

Proof. By induction on the structure of P and by cases on its shape. We only discuss a few representative cases, the others being similar or simpler.

P = idle: Then un(Γᵢ) for i = 1, 2. We conclude Γ₁ ≃ Γ₂ by observing that types of the form ⟨p⟩ are not unrestricted and that the only unrestricted session type is ◦.

P = done x: Then there exist Γ₁′ and Γ₂′ such that Γᵢ = Γᵢ′, x : • for i = 1, 2. We conclude Γ₁ ≃ Γ₂ by the same observations made in the previous case.

P = A⟨x̄⟩: Then there exist Γ₁′ and Γ₂′ such that Γᵢ = Γᵢ′, x̄ : t̄ and un(Γᵢ′) for i = 1, 2, where A : t̄. We conclude Γ₁ ≃ Γ₂ by the same observations made in the previous cases.

P = x?(y).Q: Then there exist Γ₁′, Γ₂′, t₁, t₂, T₁ and T₂ such that Γᵢ = Γᵢ′, x : ?tᵢ.Tᵢ and Γᵢ′, x : Tᵢ, y : tᵢ ⊢ Q for i = 1, 2. Using the induction hypothesis we deduce Γ₁′ ≃ Γ₂′ and t₁ ≃ t₂ and T₁ ≃ T₂. We conclude Γ₁ ≃ Γ₂ since ⟦?t₁.T₁⟧ = ⟦T₁⟧ = ⟦T₂⟧ = ⟦?t₂.T₂⟧.

P = P₁ +_p P₂: Then there exist Γᵢⱼ for 1 ≤ i, j ≤ 2 such that Γᵢ = Γᵢ₁ +_p Γᵢ₂ and Γᵢⱼ ⊢ Pⱼ for 1 ≤ i, j ≤ 2. Using the induction hypothesis we deduce Γ₁ⱼ ≃ Γ₂ⱼ for all j = 1, 2. We conclude Γ₁ = Γ₁₁ +_p Γ₁₂ ≃ Γ₂₁ +_p Γ₂₂ = Γ₂.

P = inl x.Q: Then there exist Γ₁′, Γ₂′, T₁, T₂, S₁ and S₂ such that Γᵢ = Γᵢ′, x : Tᵢ ⊕ Sᵢ and Γᵢ′, x : Tᵢ ⊢ Q for i = 1, 2. Using the induction hypothesis we deduce Γ₁′ ≃ Γ₂′ and T₁ ≃ T₂. We conclude Γ₁ ≃ Γ₂ by observing that ⟦T₁ ⊕ S₁⟧ = ⟦T₁⟧ = ⟦T₂⟧ = ⟦T₂ ⊕ S₂⟧.

P = P₁ | P₂: From t-par we deduce that there exist Γᵢ′, Γᵢ″ and Tᵢ for i = 1, 2 such that Γᵢ = Γᵢ′, Γᵢ″, x : ⟨⟦Tᵢ⟧⟩ and Γᵢ′, x : Tᵢ ⊢ P₁ and Γᵢ″, x : Tᵢ ⊢ P₂ for i = 1, 2. Using the induction hypothesis we deduce Γ₁′ ≃ Γ₂′ and Γ₁″ ≃ Γ₂″ and T₁ ≃ T₂, namely ⟦T₁⟧ = ⟦T₂⟧. We conclude Γ₁ ≃ Γ₂. ◀

The next result shows that a process becoming aware of a probabilistic choice can be typed differently so as to account for the probabilistic information transmitted with the choice. This is the key lemma that allows us to deal with s-par-choice. Note that, as the process may be connected with other processes through sessions, the information concerning the probabilistic choice may need to propagate along an arbitrary number of sessions.
Lemma C.3. If Γ, x : T₁ +_r T₂ ⊢ P, then there exist Γ₁ and Γ₂ such that Γ₁ +_r Γ₂ = Γ and Γᵢ, x : Tᵢ ⊢ P for every i = 1, 2.

Proof. If T₁ = T₂ we conclude immediately by taking Γ₁ = Γ₂ = Γ, so from now on we assume T₁ ≠ T₂, which can happen only when T₁ and T₂ are a choice. We proceed by induction on the derivation of Γ, x : T₁ +_r T₂ ⊢ P and by cases on the last rule applied. We discuss only interesting cases, particularly those compatible with the assumption T₁ ≠ T₂.

t-var: Then P = A⟨x̄⟩. From t-var we deduce:
  Γ, x : T₁ +_r T₂ = ∆, x̄ : t̄;  un(∆);  A : t̄;  safe(t̄).
Since T₁ and T₂ are choices, they cannot be unrestricted. Therefore, x must be one of the variables in x̄ and T₁ +_r T₂ is one of the types in t̄. But then T₁ +_r T₂ is a branch, which is not a safe type according to Definition 4.2. We conclude that this case is impossible.

t-branch, when x is the endpoint being used for input: Then P = case x [P₁, P₂]. From t-branch we deduce that there exist ∆₁, ∆₂, S₁ and S₂ such that:
  ∆₁ +_p ∆₂ = Γ;
  T₁ +_r T₂ = S₁ &_p S₂;
  ∆ᵢ, x : Sᵢ ⊢ Pᵢ for i = 1, 2.
Then there exist p₁ and p₂ such that Tᵢ = S₁ &_{pᵢ} S₂ and p = rp₁ + (1−r)p₂. Let Γᵢ ≝ ∆₁ +_{pᵢ} ∆₂ and observe that (∆₁ +_{p₁} ∆₂) +_r (∆₁ +_{p₂} ∆₂) = Γ. We conclude Γᵢ, x : S₁ &_{pᵢ} S₂ ⊢ case x [P₁, P₂] with an application of t-branch.

t-par: Then P = Q | R. Since T₁ +_r T₂ is a session type and not a type of the form ⟨q⟩, x cannot be used by both Q and R. We consider only the case in which x is used by Q, the other case being symmetric. From t-par we deduce:
  ∆′, y : S, x : T₁ +_r T₂ ⊢ Q;
  ∆″, y : S ⊢ R;
  Γ = ∆′, ∆″, y : ⟨⟦S⟧⟩.
Using the induction hypothesis we deduce that there exist ∆₁′, ∆₂′, S₁ and S₂ such that (∆₁′, y : S₁) +_r (∆₂′, y : S₂) = ∆′, y : S and ∆ᵢ′, y : Sᵢ, x : Tᵢ ⊢ Q for i = 1, 2. In particular, we have S = S₁ +_r S₂. Using the induction hypothesis once again, we deduce that there exist ∆₁″ and ∆₂″ such that ∆₁″ +_r ∆₂″ = ∆″ and ∆ᵢ″, y : Sᵢ ⊢ R for i = 1, 2. Let Γᵢ ≝ ∆ᵢ′, ∆ᵢ″, y : ⟨⟦Sᵢ⟧⟩ and observe that Γ₁ +_r Γ₂ = Γ. We conclude Γᵢ, x : Tᵢ ⊢ P for i = 1, 2 with an application of t-par.

t-choice: Then we have:
  P = P₁ +_p P₂ for some P₁ and P₂;
  T₁ +_r T₂ = S₁ +_p S₂ for some S₁, S₂ and p;
  ∆₁ +_p ∆₂ = Γ for some ∆₁ and ∆₂;
  ∆ₖ, x : Sₖ ⊢ Pₖ for k = 1, 2.
Since T₁ and T₂ are choices, S₁ and S₂ must be branches. Since the combination of branches is only defined when they are exactly the same, we deduce S₁ = S₂ = T₁ +_r T₂. Using the induction hypothesis, we deduce that for every k = 1, 2 there exist ∆ₖ₁ and ∆ₖ₂ such that ∆ₖ₁ +_r ∆ₖ₂ = ∆ₖ and ∆ₖᵢ, x : Tᵢ ⊢ Pₖ for i = 1, 2. Let Γᵢ ≝ ∆₁ᵢ +_p ∆₂ᵢ for i = 1, 2 and observe that Γ₁ +_r Γ₂ = Γ. We conclude Γᵢ, x : Tᵢ ⊢ P for i = 1, 2 with an application of t-choice. ◀

We now have all the ingredients to show that typing is preserved by structural pre-congruence.
Lemma C.4. If Γ ⊢ P and P ≼ Q, then Γ ⊢ Q.

Proof. By induction on the derivation of P ≼ Q and by cases on the last rule applied. We only discuss a few selected cases, the others being simpler or trivial.

s-no-choice: Then we have P = Q +_1 R. From t-choice we deduce that there exist Γ₁ and Γ₂ such that Γ = Γ₁ +_1 Γ₂ and Γ₁ ⊢ Q and Γ₂ ⊢ R. Using Lemma C.1 we conclude Γ = Γ₁.

s-choice-idem: Then we have P = Q +_p Q. From t-choice we deduce that there exist Γ₁ and Γ₂ such that Γ = Γ₁ +_p Γ₂ and Γᵢ ⊢ Q for i = 1, 2. By Lemma C.2 we deduce Γ₁ ≃ Γ₂. It is a simple exercise to show that Γ = Γ₁ +_p Γ₂ and Γ₁ ≃ Γ₂ imply Γ = Γ₁ = Γ₂, which suffices to conclude.

s-par-choice: Then we have:
  P = (P₁ +_p P₂) | R;
  Q = (P₁ | R) +_p (P₂ | R).
From t-par and t-choice we deduce:
  Γ = (Γ₁ +_p Γ₂), ∆, x : ⟨⟦T₁ +_p T₂⟧⟩;
  Γᵢ, x : Tᵢ ⊢ Pᵢ for i = 1, 2;
  ∆, x : T₁ +_p T₂ ⊢ R.
Using Lemma C.3 we deduce that there exist ∆₁ and ∆₂ such that ∆₁ +_p ∆₂ = ∆ and ∆ᵢ, x : Tᵢ ⊢ R for every i = 1, 2. We derive Γᵢ, ∆ᵢ, x : ⟨⟦Tᵢ⟧⟩ ⊢ Pᵢ | R for i = 1, 2 using t-par. We conclude (Γ₁ +_p Γ₂), (∆₁ +_p ∆₂), x : ⟨⟦T₁⟧⟩ +_p ⟨⟦T₂⟧⟩ ⊢ Q observing that

  ⟨⟦T₁⟧⟩ +_p ⟨⟦T₂⟧⟩ = ⟨p⟦T₁⟧ + (1−p)⟦T₂⟧⟩    by Definition 3.5
                    = ⟨⟦T₁ +_p T₂⟧⟩           by Proposition 3.6

s-par-assoc: Then we have P = (P₁ | P₂) | P₃ and Q = P₁ | (P₂ | P₃) and fn(P₁) ∩ fn(P₃) = ∅. From t-par we deduce:
  Γ = ∆, Γ₃, x : ⟨⟦T⟧⟩;
  ∆, x : T ⊢ P₁ | P₂;
  Γ₃, x : T ⊢ P₃.
From fn(P₁) ∩ fn(P₃) = ∅ and dom(∆) ∩ dom(Γ₃) = ∅ we deduce x ∈ fn(P₂). Hence, from t-par we deduce:
  ∆ = Γ₁, Γ₂, y : ⟨⟦S⟧⟩;
  Γ₁, y : S ⊢ P₁;
  Γ₂, x : T, y : S ⊢ P₂.
We derive Γ₂, Γ₃, x : ⟨⟦T⟧⟩, y : S ⊢ P₂ | P₃ with one application of t-par and we conclude Γ₁, Γ₂, Γ₃, x : ⟨⟦T⟧⟩, y : ⟨⟦S⟧⟩ ⊢ P₁ | (P₂ | P₃) with another application of t-par. ◀

Theorem 4.3 (subject reduction). If Γ ⊢ P and P → Q, then Γ ⊢ Q.

Proof. By induction on the derivation of P → Q and by cases on the last rule applied. Since typing is syntax directed, in each case we can use the typing rule corresponding to the shape of the term under consideration.

r-com: Then there exist x, y, P₁ and P₂ such that:
  P = x!y.P₁ | x?(y).P₂;
  Q = P₁ | P₂.
From t-par, t-out and t-in we deduce that there exist Γ₁, Γ₂, t and T such that:
  Γ = Γ₁, Γ₂, x : ⟨⟦!t.T⟧⟩, y : t;
  Γ₁, x : T ⊢ P₁;
  Γ₂, x : T, y : t ⊢ P₂.
We conclude Γ ⊢ P₁ | P₂ with one application of t-par and observing that ⟦!t.T⟧ = ⟦T⟧ by Definition 3.3.

r-left: Then there exist x, P′, Q₁ and Q₂ such that:
  P = inl x.P′ | case x [Q₁, Q₂];
  Q = P′ | Q₁.
From t-par, t-left and t-branch we deduce that there exist Γ′, ∆, T and S such that:
  Γ = Γ′, ∆, x : ⟨⟦T ⊕ S⟧⟩;
  Γ′, x : T ⊢ P′;
  ∆, x : T ⊢ Q₁.
We conclude Γ ⊢ Q with one application of t-par and observing that ⟦T ⊕ S⟧ = ⟦T⟧ by Definition 3.3.

r-par: Then there exist P₁, P₁′ and P₂ such that:
  P = P₁ | P₂ for some P₁ and P₂;
  P₁ → P₁′;
  Q = P₁′ | P₂.
From t-par we deduce that there exist Γ₁, Γ₂, x and T such that:
  Γ = Γ₁, Γ₂, x : ⟨⟦T⟧⟩;
  Γ₁, x : T ⊢ P₁;
  Γ₂, x : T ⊢ P₂.
Using the induction hypothesis we deduce Γ₁, x : T ⊢ P₁′. We conclude Γ ⊢ Q with one application of t-par.

r-new: Then there exist x, R and R′ such that:
  P = (x)R;
  R → R′;
  Q = (x)R′.
From t-new we deduce that there exists p such that Γ, x : ⟨p⟩ ⊢ R. Using the induction hypothesis we deduce that Γ, x : ⟨p⟩ ⊢ R′ and we conclude Γ ⊢ Q with one application of t-new.

r-choice: Then there exist P₁, P₁′, P₂ and p such that:
  P = P₁ +_p P₂;
  P₁ → P₁′;
  Q = P₁′ +_p P₂.
From t-choice we deduce that there exist Γ₁ and Γ₂ such that:
  Γ = Γ₁ +_p Γ₂;
  Γᵢ ⊢ Pᵢ for all i = 1, 2.
Using the induction hypothesis we deduce Γ₁ ⊢ P₁′ and we conclude with one application of t-choice.

r-struct: Then we have P ≼ R → R′ ≼ Q for some R and R′. From Γ ⊢ P and Lemma C.4 we deduce Γ ⊢ R. Using the induction hypothesis we deduce that Γ ⊢ R′. From Lemma C.4 we conclude Γ ⊢ Q. ◀
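The proofs in this appendix repeatedly combine types and contexts with +_p and rely on Proposition 3.6 and Lemma C.1. The following Python sketch is illustrative only: the tuple encoding of the two relevant type forms (⟨p⟩ and annotated choices over fixed branches T and S) is an assumption made here for concreteness, not the paper's definition. It checks both facts on sample values with exact rationals.

```python
from fractions import Fraction as F

# Toy representation of the two type forms combined by +_p in the proofs:
# ('prob', p) stands for <p>; ('choice', q) stands for T ⊕_q S for fixed T, S.
def combine(t, s, p):
    """t +_p s, defined only when t and s have the same shape (assumed syntax)."""
    if t[0] == 'prob' and s[0] == 'prob':
        return ('prob', p * t[1] + (1 - p) * s[1])
    if t[0] == 'choice' and s[0] == 'choice':
        return ('choice', p * t[1] + (1 - p) * s[1])
    raise ValueError('combination undefined')

def success(ty, pT, pS):
    """[[ty]] given the success probabilities of the fixed branches T and S."""
    if ty[0] == 'prob':
        return ty[1]
    q = ty[1]
    return q * pT + (1 - q) * pS   # [[T ⊕_q S]] = q[[T]] + (1-q)[[S]]

pT, pS = F(1, 3), F(1, 9)          # sample branch success probabilities
t1, t2 = ('choice', F(1, 2)), ('choice', F(1, 4))
p = F(2, 5)

# Proposition 3.6: [[t1 +_p t2]] = p[[t1]] + (1-p)[[t2]]
lhs = success(combine(t1, t2, p), pT, pS)
rhs = p * success(t1, pT, pS) + (1 - p) * success(t2, pT, pS)
assert lhs == rhs

# Lemma C.1: combining with probability 1 keeps the left type
assert combine(t1, t2, F(1)) == t1
assert combine(('prob', F(1, 3)), ('prob', F(2, 3)), F(1)) == ('prob', F(1, 3))
```

The first assertion instantiates Proposition 3.6's identity, and the last two instantiate the two cases considered in the proof of Lemma C.1.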
D  Proof of Theorem 4.5

In this appendix we develop the proof that well-typed processes are deadlock free. First of all, we introduce the auxiliary notion of hyper-context, which will be useful in the proof of Theorem 4.5. A hyper-context H is a non-empty multiset of contexts written Γ₁ ♯ ... ♯ Γₙ. We write dom(H) for the union of the domains of the contexts in H and H₁ ♯ H₂ for the multiset union of H₁ and H₂.

If we think of a context as the abstraction of a well-typed process, then a hyper-context intuitively represents a parallel composition of such processes, and a well-formed hyper-context is one that represents a well-typed parallel composition of the same processes. Formally:

Definition D.1 (well-formed hyper-context). We say that H is well formed if there exists Γ such that H ⊢ Γ is derivable using the following axiom and rule:

  Γ ⊢ Γ

  H₁ ⊢ Γ, x : T      H₂ ⊢ ∆, x : T
  ─────────────────────────────────
      H₁ ♯ H₂ ⊢ Γ, ∆, x : ⟨⟦T⟧⟩

Note that the rightmost rule establishing the well-formedness of a hyper-context corresponds to t-par in the typing of processes. A simple induction on the derivation of H ⊢ Γ suffices to establish that dom(H) = dom(Γ).

We now show that there is a relationship between well-formed hyper-contexts and the absence of cycles in the (contexts of the) processes that are composed in parallel.

Definition D.2 (acyclic hyper-context). We say that H has a cycle if there exist n pairwise distinct names x₁, ..., xₙ and n pairwise distinct contexts Γ₁, ..., Γₙ ∈ H with n ≥ 2 such that xᵢ ∈ dom(Γᵢ) ∩ dom(Γ₍ᵢ mod n₎₊₁) for every 1 ≤ i ≤ n. We say that H is acyclic if it has no cycle.

Proposition D.3. If H is well formed, then it is acyclic.

Proof. We prove a more general result, namely that H ⊢ Γ implies that H is acyclic. We proceed by induction on the derivation of H ⊢ Γ. In the base case we have H = Γ, hence H is acyclic because a cycle requires two or more contexts. Suppose H = H₁ ♯ H₂ and H₁ ⊢ Γ₁, x : T and H₂ ⊢ Γ₂, x : T and Γ = Γ₁, Γ₂, x : ⟨⟦T⟧⟩. By induction hypothesis both H₁ and H₂ are acyclic. Hence, any cycle of H must involve two distinct names x₁ ∈ dom(Γ₁, x : T) and x₂ ∈ dom(Γ₂, x : T) that connect a context in H₁ and a context in H₂. However, H₁ and H₂ share just the name x because dom(Γ₁) ∩ dom(Γ₂) = ∅. Therefore, H is acyclic. ◀

The next step towards the proof of deadlock freedom is to prove a proximity lemma showing that, whenever two well-typed processes share a name – that is, when they are connected by a session – it is always possible to rearrange them using structural pre-congruence and respecting typing in such a way that they sit next to each other and can possibly reduce. To do so, we introduce some standard notation for process contexts:
Definition D.4 (process context). A process context is a process containing a finite number of unguarded "holes" [ ]. Formally, it is a term generated by the following grammar:

  C, D ::= [ ] | P | C | D | C +_p D | (x)C

If C is a context with n holes numbered from left to right according to the syntax of C, we write C[P₁]···[Pₙ] for the process obtained by filling the i-th hole with Pᵢ. Note that filling a hole differs from substitution in that it may capture names, for example if Pᵢ is inserted in the scope of a binder. By writing C[P₁]···[Pₙ], we implicitly assume that C has n holes.

Here is the proximity lemma. The hypothesis x ∈ (fn(P) \ bn(C)) ∩ fn(Q) makes sure that the name x showing up in the context Γ, x : T is the very same x that occurs free in P.

Lemma D.5 (proximity lemma). If Γ, x : T ⊢ C[P] and ∆, x : T ⊢ Q and x ∈ (fn(P) \ bn(C)) ∩ fn(Q) and dom(Γ) ∩ dom(∆) = ∅, then there exists D such that C[P] | Q ≼ D[P | Q] and Γ, ∆, x : ⟨⟦T⟧⟩ ⊢ D[P | Q].

Proof. By induction on C. We omit symmetric cases.

C = [ ]: We conclude by taking D ≝ [ ] with one application of t-par.

C = R | C′: From t-par we deduce Γ = Γ₁, Γ₂, y : ⟨⟦S⟧⟩ and Γ₁, y : S ⊢ R and Γ₂, y : S, x : T ⊢ C′[P]. Note that x ≠ y, because the type of x in the context used for typing C′[P] is a session type and not a type of the form ⟨r⟩. Using the induction hypothesis we deduce that there exists D′ such that C′[P] | Q ≼ D′[P | Q] and Γ₂, y : S, ∆, x : ⟨⟦T⟧⟩ ⊢ D′[P | Q]. Let D ≝ R | D′. We derive

  C[P] | Q = (R | C′[P]) | Q         by definition of C
           ≼ R | (C′[P] | Q)         by s-par-assoc, using x ∈ fn(C′[P]) ∩ fn(Q)
           ≼ R | D′[P | Q]           by the property of D′
           = D[P | Q]                by definition of D

and we conclude with one application of t-par.

C = R +_p C′: From t-choice we deduce Γ, x : T = (Γ₁, x : T₁) +_p (Γ₂, x : T₂) and Γ₁, x : T₁ ⊢ R and Γ₂, x : T₂ ⊢ C′[P]. In particular, T = T₁ +_p T₂. By Lemma C.3 we deduce that there exist ∆₁ and ∆₂ such that ∆ = ∆₁ +_p ∆₂ and ∆ᵢ, x : Tᵢ ⊢ Q for i = 1, 2. Using the induction hypothesis we deduce that there exists D′ such that C′[P] | Q ≼ D′[P | Q] and Γ₂, ∆₂, x : ⟨⟦T₂⟧⟩ ⊢ D′[P | Q]. Let D ≝ (R | Q) +_p D′. We derive

  C[P] | Q = (R +_p C′[P]) | Q            by definition of C
           ≼ (R | Q) +_p (C′[P] | Q)      by s-par-choice
           ≼ (R | Q) +_p D′[P | Q]        by the property of D′
           = D[P | Q]                     by definition of D

We derive Γ₁, ∆₁, x : ⟨⟦T₁⟧⟩ ⊢ R | Q using t-par and we conclude with one application of t-choice, observing that ⟨⟦T⟧⟩ = ⟨⟦T₁ +_p T₂⟧⟩ = ⟨⟦T₁⟧⟩ +_p ⟨⟦T₂⟧⟩ by Proposition 3.6.

C = (y)C′: From t-new we deduce Γ, y : ⟨⟦S⟧⟩, x : T ⊢ C′[P]. Since y is bound we may assume, without loss of generality, that y ∉ fn(Q). Using the induction hypothesis we deduce that there exists D′ such that C′[P] | Q ≼ D′[P | Q] and Γ, y : ⟨⟦S⟧⟩, ∆, x : ⟨⟦T⟧⟩ ⊢ D′[P | Q]. Let D ≝ (y)D′. We derive

  C[P] | Q = (y)C′[P] | Q       by definition of C
           ≼ (y)(C′[P] | Q)     by s-par-new
           ≼ (y)D′[P | Q]       by the property of D′
           = D[P | Q]           by definition of D

and we conclude with one application of t-new. ◀

We now show that well-typed processes can be rewritten in a normal form in which all the restrictions and probabilistic choices have been "pushed outwards", so that all the parallel compositions concern sequential processes.
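This rewriting can be pictured as a recursive function that hoists restrictions and probabilistic choices above parallel compositions. The sketch below is an illustration under assumptions: the tuple encoding of processes and the function itself are not from the paper. It mimics the s-par-new and s-par-choice commutations (using commutativity of | for brevity) and assumes all bound names are distinct, so hoisting a restriction cannot capture names.

```python
# Toy AST: ('par', P, Q), ('choice', p, P, Q), ('new', x, P), or a plain
# string for a sequential process. norm pushes 'new' and 'choice' above
# 'par', in the spirit of the s-par-new and s-par-choice commutations.
def norm(P):
    if isinstance(P, str):
        return P
    tag = P[0]
    if tag == 'new':
        return ('new', P[1], norm(P[2]))
    if tag == 'choice':
        return ('choice', P[1], norm(P[2]), norm(P[3]))
    # tag == 'par': normalize both sides, then hoist top-level binders/choices
    L, R = norm(P[1]), norm(P[2])
    if isinstance(L, tuple) and L[0] == 'new':        # (x)P | Q  ->  (x)(P | Q)
        return ('new', L[1], norm(('par', L[2], R)))  # assumes L[1] not free in R
    if isinstance(L, tuple) and L[0] == 'choice':     # (P +p P') | Q -> (P|Q) +p (P'|Q)
        return ('choice', L[1], norm(('par', L[2], R)), norm(('par', L[3], R)))
    if isinstance(R, tuple) and R[0] in ('new', 'choice'):
        return norm(('par', R, L))                    # symmetric cases, swapping sides
    return ('par', L, R)

P = ('par', ('choice', 0.5, 'inl x.done x', 'inr x'), ('new', 'y', 'case x [...]'))
print(norm(P))
```

On the sample process the probabilistic choice and the restriction end up at the top, with only sequential processes under the parallel compositions, matching the shape of the normal form defined next.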
Definition D.6 (prefixed, sequential and exposed process) . A process is prefixed if it hasthe form x ?( y ) .P or x ! y.P or inl x.P or inr x.P or case x [ P, Q ] . A process is sequential ifit is either prefixed or it has the form idle or done x or A h x i . A process is exposed if it isa parallel composition of sequential processes. (cid:73) Definition D.7 (process normal form) . A process is in normal form if it is generated bythe grammar P nf ::= P | ( x ) P nf | P nf p (cid:1) Q nf where P is an exposed process. (cid:73) Lemma D.8. If P is in normal form and P is exposed and Γ , x : T ‘ P and Γ , x : T ‘ P and dom (Γ ) ∩ dom (Γ ) = ∅ , then there exists P in normal form such that P | P (cid:52) P and Γ , Γ , x : h (cid:74) T (cid:75) i ‘ P . Proof.
A simple induction on the structure of P recalling that it is in normal form. In thebase case, when P is exposed, P | P is already in normal form and the result follows byreflexivity of (cid:52) and one application of t-par . The inductive cases are analogous to the onesdiscussed in the proof of Lemma D.5. (cid:74)(cid:73) Lemma D.9. If P and P are in normal form and Γ , x : T ‘ P and Γ , x : T ‘ P and dom (Γ ) ∩ dom (Γ ) = ∅ , then there exists P in normal form such that P | P (cid:52) P and Γ , Γ , x : h (cid:74) T (cid:75) i ‘ P . Proof.
A simple induction on P recalling that it is in normal form. In the base case, when P is exposed, the result follows from Lemma D.8. (cid:74)(cid:73) Lemma D.10 (normal form) . If Γ ‘ P , then there exists Q in normal form such that P (cid:52) Q and Γ ‘ Q . Proof.
By induction on P and by cases on its shape.

Case P sequential. Then P is already in normal form and there is nothing left to prove.

Case P = P1 p⊕ P2. From t-choice we deduce Γ = Γ1 p⊕ Γ2 and Γi ⊢ Pi for i = 1, 2. Using the induction hypothesis we deduce that there exist Q1 and Q2 in normal form such that Pi ⪯ Qi and Γi ⊢ Qi for i = 1, 2. Let Q ≝ Q1 p⊕ Q2 and observe that Q is in normal form. Now P = P1 p⊕ P2 ⪯ Q1 p⊕ Q2 = Q and we conclude Γ ⊢ Q with one application of t-choice.

Case P = (x)P′. From t-new we deduce Γ, x : ⟨p⟩ ⊢ P′ for some p. Using the induction hypothesis we deduce that there exists Q′ in normal form such that P′ ⪯ Q′ and Γ, x : ⟨p⟩ ⊢ Q′. Let Q ≝ (x)Q′ and observe that Q is in normal form. Now P = (x)P′ ⪯ (x)Q′ = Q and we conclude Γ ⊢ Q with one application of t-new.

Case P = P1 | P2. From t-par we deduce Γ = Γ1, Γ2, x : ⟨⟦T⟧⟩ and Γ1, x : T ⊢ P1 and Γ2, x : T̄ ⊢ P2. Using the induction hypothesis we deduce that there exist Q1 and Q2 in normal form such that Pi ⪯ Qi for i = 1, 2 and Γ1, x : T ⊢ Q1 and Γ2, x : T̄ ⊢ Q2. We conclude using Lemma D.9. ◀

We now have almost all the ingredients for proving Theorem 4.5. The only aspect we have to consider is that the proof will be an induction on the structure of the typing derivation, hence the property that the process is well typed in the empty context is not general enough to apply the induction hypothesis. We generalize Theorem 4.5 by considering processes that are well typed in balanced contexts, assuring us that all the session endpoints are used.
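The normalization procedure underlying Lemmas D.8–D.10 can be made concrete. The following is a minimal Python sketch under simplifying assumptions: the constructors `Seq`, `Par`, `New`, `Choice` are a made-up toy encoding of the calculus, not the paper's formal syntax, and capture-avoidance is ignored by assuming all bound names are pairwise distinct. It pushes restrictions and probabilistic choices outwards, in the spirit of rules s-par-new and s-par-choice.

```python
from dataclasses import dataclass

# Toy process syntax (illustrative encoding only).
@dataclass
class Seq:                 # a sequential process, e.g. "x!y.P" or "idle"
    label: str

@dataclass
class Par:                 # parallel composition P | Q
    left: object
    right: object

@dataclass
class New:                 # session restriction (x)P
    name: str
    body: object

@dataclass
class Choice:              # probabilistic choice P p⊕ Q
    p: float
    left: object
    right: object

def normalize(proc):
    """Push restrictions and choices outwards (cf. s-par-new, s-par-choice)."""
    if isinstance(proc, Par):
        l, r = normalize(proc.left), normalize(proc.right)
        if isinstance(l, New):      # ((x)P) | Q  becomes  (x)(P | Q)
            return New(l.name, normalize(Par(l.body, r)))
        if isinstance(r, New):
            return New(r.name, normalize(Par(l, r.body)))
        if isinstance(l, Choice):   # (P p⊕ P') | Q  becomes  (P | Q) p⊕ (P' | Q)
            return Choice(l.p, normalize(Par(l.left, r)), normalize(Par(l.right, r)))
        if isinstance(r, Choice):
            return Choice(r.p, normalize(Par(l, r.left)), normalize(Par(l, r.right)))
        return Par(l, r)            # exposed: parallel of sequential processes
    if isinstance(proc, New):
        return New(proc.name, normalize(proc.body))
    if isinstance(proc, Choice):
        return Choice(proc.p, normalize(proc.left), normalize(proc.right))
    return proc

def exposed(proc):
    """An exposed process is a parallel composition of sequential processes."""
    if isinstance(proc, Par):
        return exposed(proc.left) and exposed(proc.right)
    return isinstance(proc, Seq)

def in_normal_form(proc):
    """Pnf ::= P_exposed | (x)Pnf | Pnf p⊕ Qnf (cf. Definition D.7)."""
    if isinstance(proc, New):
        return in_normal_form(proc.body)
    if isinstance(proc, Choice):
        return in_normal_form(proc.left) and in_normal_form(proc.right)
    return exposed(proc)

# ((x)(A 0.5⊕ B)) | C normalizes to (x)((A | C) 0.5⊕ (B | C)).
P = Par(New("x", Choice(0.5, Seq("A"), Seq("B"))), Seq("C"))
Q = normalize(P)
```

The recursion mirrors the lemma structure: the `Par` case peels the outermost restriction or choice of one component, exactly as Lemmas D.8 and D.9 peel the normal forms of the two components of a parallel composition.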
▶ Definition D.11 (balanced type). We say that t is balanced, written bal(t), if either un(t) holds or t has the form ⟨p⟩ for some p. We write bal(Γ) if bal(Γ(x)) for every x ∈ dom(Γ).

▶ Lemma D.12. If bal(Γ) and Γ ⊢ P and P ↛, then P ↓.

Proof.
Without loss of generality, we may assume that P is an exposed process. Indeed: if P is not in normal form, then Lemma D.10 allows us to rewrite P into a normal-form process that is well typed in the same Γ; if P is in normal form but not exposed, then it consists of top-level session restrictions and probabilistic choices containing exposed processes, each of which is well typed in a balanced context and none of which reduces.

From the hypothesis P ↛ we deduce that none of the sequential processes in P is a process invocation. Therefore, P is a parallel composition of P1, ..., Pn, Q1, ..., Qm where the Pi are prefixed processes and the Qj are either idle or of the form done x. From Γ ⊢ P and t-par we deduce that there exist Γ1, ..., Γn, ∆1, ..., ∆m such that Γi ⊢ Pi for every 1 ≤ i ≤ n and ∆j ⊢ Qj for every 1 ≤ j ≤ m. Also, we let xi be the channel that occurs in the prefix of Pi. Clearly, xi ∈ dom(Γi). We proceed by contradiction, assuming that n ≠ 0.

It must be the case that the xi are pairwise distinct. Indeed, if xi = xj, then xi and xj would be the two peer endpoints of the same session performing complementary actions; by Lemma D.5 we would be able to move the two processes using xi and xj next to each other and P would be able to reduce, thus contradicting the hypothesis P ↛. Also, from the derivation of Γ ⊢ P we can build a derivation of Γ1 # ... # Γn # ∆1 # ... # ∆m ⊢ P according to Definition D.1.

The sub-structural nature of the type system and the hypothesis bal(Γ) ensure that each session name occurs exactly twice. Therefore, each xi must also occur free in some other Pj with j ≠ i. We let f : [1, n] → [1, n] be the function that maps i to the index of the process in which xi occurs free, that is, xi ∈ fn(P_{f(i)}) for every 1 ≤ i ≤ n. Note that f(i) ≠ i by definition of f.

Now we build the infinite sequence of names x1, x_{f(1)}, x_{f(f(1))}, x_{f(f(f(1)))}, ... Since there are only n distinct names xi and f(i) ≠ i, there are at least two names that occur infinitely often in this sequence. Consequently, the hyper-context Γ1 # ... # Γn # ∆1 # ... # ∆m must have a cycle in the sense of Definition D.2, which contradicts Proposition D.3. ◀

▶ Theorem 4.5 (deadlock freedom). If ∅ ⊢ P and P ⇒ Q, then either Q → or Q ↓.

Proof.
Immediate consequence of Theorem 4.3 and Lemma D.12. ◀
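The pigeonhole step at the heart of Lemma D.12 — iterating a fixpoint-free self-map of a finite set necessarily revisits an index, exposing a cyclic chain of mutually blocked processes — can be checked in a few lines. The sketch below is illustrative: the map `f` is a made-up instance of the "peer endpoint" function from the proof, not derived from an actual process.

```python
def find_cycle(f, start):
    """Follow start, f(start), f(f(start)), ... until an index repeats;
    return the cycle as a list of distinct indices."""
    seen = {}                          # index -> step at which it was first visited
    i, step = start, 0
    while i not in seen:
        seen[i] = step
        i, step = f[i], step + 1
    # dicts preserve insertion order, so slicing from the first occurrence
    # of the repeated index yields exactly the cycle.
    return list(seen)[seen[i]:]

# f maps each process index to the one holding the peer endpoint; f(i) != i,
# so any iteration of f must eventually loop through at least two indices.
f = {1: 3, 2: 1, 3: 2, 4: 1}
cycle = find_cycle(f, 1)
assert all(f[i] != i for i in f)       # f is fixpoint-free, as in the proof
assert len(cycle) >= 2                 # a cycle of blocked processes
```

In the proof, such a cycle in the hyper-context contradicts Proposition D.3, which is what rules out the deadlocked configuration.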
E Proof of Theorem 4.8
▶ Lemma E.1. If Γ, x : T ⊢ P and P ↓, then P ↑_x^⟦T⟧.

Proof.
By induction on the derivation of Γ, x : T ⊢ P and by cases on the last rule applied. We only discuss those cases that are compatible with the hypothesis P ↓.

Case t-idle. Then P = idle and un(T), hence T = ◦. We conclude P ↑_x^0 noting that ⟦T⟧ = 0.

Case t-done. Then P = done y. We distinguish two subcases. If x = y, then T = • and we conclude P ↑_x^1 noting that ⟦T⟧ = 1. If x ≠ y, then un(T), hence T = ◦ and we conclude as in the case of rule t-idle.

Case t-par.
Then P = P1 | P2 and Γ, x : T = Γ1, Γ2, y : ⟨⟦S⟧⟩ and Γ1, y : S ⊢ P1 and Γ2, y : S̄ ⊢ P2. It must be the case that x ∈ dom(Γi) for some i ∈ {1, 2}. We conclude using the induction hypothesis on Pi.

Case t-choice. Then there exist P1, P2, Γ1, Γ2, T1 and T2 such that P = P1 p⊕ P2 and Γ, x : T = (Γ1, x : T1) p⊕ (Γ2, x : T2) and Γi, x : Ti ⊢ Pi for i = 1, 2. From P ↓ we deduce Pi ↓ for i = 1, 2. Using the induction hypothesis we deduce Pi ↑_x^{⟦Ti⟧} for i = 1, 2, hence P ↑_x^{p⟦T1⟧ + (1−p)⟦T2⟧} by Definition 4.6. We conclude P ↑_x^{⟦T⟧} using Proposition 3.6.

Case t-new. Then there exist Q, y and p such that P = (y)Q and Γ, y : ⟨p⟩, x : T ⊢ Q. From P ↓ we deduce Q ↓. We conclude using the induction hypothesis. ◀

▶ Theorem 4.8. If x : ⟨p⟩ ⊢ P and P ↛, then P ↑_x^p.

Proof.
From Lemma D.12 we deduce P ↓. We prove that Γ, x : ⟨p⟩ ⊢ P and P ↓ imply P ↑_x^p by induction on the derivation of Γ, x : ⟨p⟩ ⊢ P and by cases on the last rule applied. We only consider those cases that are compatible with the assumption x : ⟨p⟩.

Case t-par, when the name being split is x. Then there exist P1, P2, Γ1, Γ2 and T such that P = P1 | P2 and Γ = Γ1, Γ2 and Γ1, x : T ⊢ P1 and Γ2, x : T̄ ⊢ P2 and p = ⟦T⟧. From P ↓ we deduce P1 ↓. We conclude using Lemma E.1.

Case t-par, when the name being split is some y ≠ x. Then there exist P1, P2, Γ1, Γ2 and T such that P = P1 | P2 and Γ, x : ⟨p⟩ = Γ1, Γ2, y : ⟨⟦T⟧⟩ and Γ1, y : T ⊢ P1 and Γ2, y : T̄ ⊢ P2. We only discuss the case x ∈ dom(Γ1), the other being analogous. Then Γ1 = Γ1′, x : ⟨p⟩ for some Γ1′. From P ↓ we deduce P1 ↓. We conclude using the induction hypothesis.

Case t-choice. Then there exist P1, P2, Γ1, Γ2, q, p1 and p2 such that P = P1 q⊕ P2 and Γ, x : ⟨p⟩ = (Γ1, x : ⟨p1⟩) q⊕ (Γ2, x : ⟨p2⟩) and Γi, x : ⟨pi⟩ ⊢ Pi for i = 1, 2. In particular, p = q p1 + (1 − q) p2. From P ↓ we deduce Pi ↓ for i = 1, 2. Using the induction hypothesis we deduce Pi ↑_x^{pi} for i = 1, 2, hence we conclude P ↑_x^{q p1 + (1−q) p2}.

Case t-new. Then there exist y, q and Q such that P = (y)Q and Γ, y : ⟨q⟩, x : ⟨p⟩ ⊢ Q. From P ↓ we deduce Q ↓. We conclude using the induction hypothesis. ◀

▶ Corollary 4.9 (relative success). Let P ⇑_x^p if there exist (Pn) and (pn) such that P ⇒ Pn and Pn ↑_x^{pn} for all n ∈ ℕ and lim_{n→∞} pn = p. Then (1) x : ⟨1⟩ ⊢ P and P ⇓_p imply P ⇑_x^p, and (2) x : ⟨p⟩ ⊢ P and P ⇓ imply P ⇑_x^p.

Proof.
We prove the two items separately.

Item 1. From the hypothesis P ⇓_p we know that there exist (Qn), (Rn) and (pn) such that P ⇒ Qn pn⊕ Rn and Qn ↓ for every n ∈ ℕ and lim_{n→∞} pn = p. From the hypothesis x : ⟨1⟩ ⊢ P and Theorem 4.3 we deduce x : ⟨1⟩ ⊢ Qn pn⊕ Rn for every n ∈ ℕ. From t-choice and Definition 3.5 we deduce x : ⟨1⟩ ⊢ Qn for every n ∈ ℕ. From Qn ↓ and Theorem 4.8 we deduce Qn ↑_x^1. Using Definition 4.6 we derive (Qn pn⊕ Rn) ↑_x^{pn} for every n ∈ ℕ, hence P ⇑_x^p.

Item 2. From the hypothesis P ⇓ we know that there exist (Qn), (Rn) and (pn) such that P ⇒ Qn pn⊕ Rn and Qn ↓ for every n ∈ ℕ and lim_{n→∞} pn = 1. That is, for every ε > 0 there exists N such that, for every n ≥ N, we have 1 − pn < ε. From the hypothesis x : ⟨p⟩ ⊢ P and Theorem 4.3 we deduce x : ⟨p⟩ ⊢ Qn pn⊕ Rn for every n ∈ ℕ. From t-choice and Definition 3.5 we deduce that, for every n ∈ ℕ, there exist qn and rn such that p = pn qn + (1 − pn) rn and x : ⟨qn⟩ ⊢ Qn and x : ⟨rn⟩ ⊢ Rn. From Qn ↓ and Theorem 4.8 we deduce Qn ↑_x^{qn} for every n ∈ ℕ, hence (Qn pn⊕ Rn) ↑_x^{pn qn} for every n ∈ ℕ. Now, for every n ≥ N,

  p − pn qn = pn qn + (1 − pn) rn − pn qn = (1 − pn) rn ≤ 1 − pn < ε

since rn ≤ 1, hence lim_{n→∞} pn qn = p and therefore P ⇑_x^p. ◀
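The ε-argument closing item 2 can be sanity-checked numerically. In the sketch below all numbers are made-up illustrations (and rn is held constant for simplicity, whereas in the proof it may vary with n): since p = pn·qn + (1 − pn)·rn, the gap |p − pn·qn| equals (1 − pn)·rn and vanishes as pn → 1.

```python
# Numeric sanity check for the limit argument in Corollary 4.9(2).
# All values are illustrative assumptions, not taken from the paper.
p = 0.7   # success probability recorded in the type <p>
r = 0.3   # assumed success probability r_n of the residual component R_n

def q_n(p_n):
    """Solve p = p_n*q_n + (1 - p_n)*r for q_n (r_n = r held fixed)."""
    return (p - (1 - p_n) * r) / p_n

# As p_n -> 1, the observed success probability p_n*q_n converges to p:
# the gap is exactly (1 - p_n)*r, which shrinks linearly in 1 - p_n.
p_ns = [0.9, 0.99, 0.999, 0.9999]
gaps = [abs(p - p_n * q_n(p_n)) for p_n in p_ns]
```

Each gap is bounded by 1 − pn, matching the inequality p − pn qn = (1 − pn) rn ≤ 1 − pn used in the proof.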