Information Flow Safety in Multiparty Sessions
Sara Capecchi, Ilaria Castellani, Mariangiola Dezani-Ciancaglini
B. Luttik and F. D. Valencia (Eds.): 18th International Workshop on Expressiveness in Concurrency (EXPRESS 2011), EPTCS 64, 2011, pp. 16–30, doi:10.4204/EPTCS.64.2
Sara Capecchi
Dipartimento di Informatica, Università di Torino, corso Svizzera 185, 10149 Torino, Italy ∗
[email protected]

Ilaria Castellani
INRIA, 2004 route des Lucioles, 06902 Sophia Antipolis, France
[email protected]

Mariangiola Dezani-Ciancaglini
Dipartimento di Informatica, Università di Torino, corso Svizzera 185, 10149 Torino, Italy
[email protected]
Abstract
We consider a calculus for multiparty sessions enriched with security levels for messages. We propose a monitored semantics for this calculus, which blocks the execution of processes as soon as they attempt to leak information. We illustrate the use of our monitored semantics with various examples, and show that the induced safety property implies a noninterference property studied previously.
Keywords: concurrency, session calculi, secure information flow, monitored semantics, safety.
With the advent of web technologies, we are faced today with a powerful computing environment which is inherently parallel, distributed and heavily relies on communication. Since computations take place concurrently on several heterogeneous devices, controlled by parties which possibly do not trust each other, security properties such as confidentiality and integrity of data become of crucial importance.

A session is an abstraction for various forms of “structured communication” that may occur in a parallel and distributed computing environment. Examples of sessions are a client-service negotiation, a financial transaction, or an interaction among different services within a web application.
Session types, which specify the expected behaviour of participants in sessions, were originally introduced in [15], on a variant of the π-calculus [11] including a construct for session creation and two n-ary operators of labelled internal and external choice, called selection and branching. The basic properties ensured by session types are the absence of communication errors (communication safety) and the conformance to the session protocol (session fidelity). Since then, more powerful session calculi have been investigated, allowing delegation of tasks among participants and multiparty interaction within a single session, and equipped with increasingly sophisticated session types, ensuring additional properties like progress.

In previous work [5], we addressed the question of incorporating security requirements into session types. To this end, we considered a calculus for multiparty sessions with delegation, enriched with security levels for both session participants and data. We proposed a session type system for this calculus, adding access control and secure information flow requirements in the typing rules in order to guarantee the preservation of data confidentiality during session execution.

∗ Work partially funded by the ANR-08-EMER-010 grant PARTOUT, and by the MIUR Projects DISCO and IPODS.

In the present paper, we propose for this calculus a monitored semantics, which blocks the execution of processes as soon as they attempt to leak information, raising an error. Typically, this happens when a process tries to participate in a public communication after receiving or testing a secret value.
This monitored semantics induces a natural notion of safety on processes: a process is safe if all its monitored computations are successful (in a dynamically evolving environment and in the presence of a passive attacker, which may only change secret information at each step).

Expectedly, this monitored semantics is closely related to the security type system presented in [5]. Indeed, some of the constraints imposed by the monitored operational rules are simply lifted from the typing rules. However, there are two respects in which the constraints of the monitoring semantics are simpler. First, they refer to individual computations. In other words, they are local, whereas type constraints are both local and global. Second, one of these constraints (the lower bound for the level of communications in a given session) may be dynamically computed during execution, and hence services do not need to be statically annotated with levels, as was required by the type system of [5]. This means that the language itself may be simplified when the concern is on safety rather than typability.

Other advantages of safety over typability are not specific to session calculi. Like security, safety is a semantic notion. Hence it is more permissive than typability in that it ignores unreachable parts of processes: for instance, in our setting, a high conditional with a low branch will be ruled out by the type system but will be considered safe if the low branch is never taken. Safety also offers more flexibility than types in the context of web programming, where security policies may change dynamically.

Compared to security, safety has again the advantage of locality versus globality. In session calculi, it also improves on security in another respect. Indeed, in these calculi processes communicate asynchronously and messages transit in queues before being consumed by their receivers.
Then, while the monitored semantics blocks the very act of putting a public message in the queue after a secret message has been received, a violation of the security property can only be detected after the public message has been put in the queue, that is, after the confidentiality breach has occurred, and possibly already caused damage. This means that safety allows early leak detection, whereas security only allows late detection.

Finally, safety seems more appealing than security when the dangerous behaviour comes from an accidental rather than an intentional transgression of the security policy. Indeed, in this case a monitored semantics could offer useful feedback to the programmer, in the form of informative error messages. Although this possibility is not explored in the present paper, it is the object of ongoing work.

The main contribution of this work is a monitored semantics for a multiparty session calculus, and the proof that the induced information flow safety property strictly implies the information flow security property of [5]. While the issue of safety has recently received much attention in the security community (see Section 7), it has not, to our knowledge, been addressed in the context of session calculi so far.

The rest of the paper is organised as follows. In Section 2 we motivate our approach with an example. Section 3 introduces the syntax and semantics of our calculus. In Section 4 we recall the definition of security from [5] and illustrate it with examples. Section 5 presents our monitored semantics and Section 6 introduces our notion of safety and establishes its relation to security. Finally, Section 7 concludes with a discussion on related and future work.
Let us illustrate our approach with an introductory example, inspired by [2]. Suppose we want to model the interaction between an online health service S and a user U. Each time the user wishes to consult the service, she opens a connection with the server and sends him her username (here by convention we shall use “she” for the user and “he” for the server), assuming she has previously registered with the service. She may then choose between two kinds of service:

I = ā[2]

U = a[1](α). α!⟨2, un^⊥⟩.
    if simple^⊥ then α ⊕^⊥ ⟨2, sv1⟩. α!⟨2, que^⊥⟩. α?(2, ans^⊥). 0
    else α ⊕^⊥ ⟨2, sv2⟩. α!⟨2, pwd^⊤⟩. α?(2, form^⊤).
         if gooduse(form^⊤) then α!⟨2, que^⊤⟩. α?(2, ans^⊤). 0
         else α!⟨2, que^⊥⟩. α?(2, ans^⊥). 0

S = a[2](α). α?(1, un^⊥).
    α &^⊥ (1, { sv1 : α?(1, que^⊥). α!⟨1, ans^⊥⟩. 0,
                sv2 : α?(1, pwd^⊤). α!⟨1, form^⊤⟩. α?(1, que^⊤). α!⟨1, ans^⊤⟩. 0 })

Figure 1: The online medical service example.

1. simple consultation: the user asks questions from the medical staff. Questions and answers are public; for instance, they can be published in a forum visible to every user. The staff has no privacy constraint.
2. medical consultation: the user sends questions together with private data (e.g., results of medical exams) to the medical staff, in order to receive a diagnosis or advice about medicines or further exams. To access these features she must enter a password, and wait for a secure form on which to send her data.
Here questions and answers are secret (modelling the fact that they are sent in a secure mode and that the staff is compelled to maintain privacy).

More precisely, this interaction may be described by the following protocol, in which we add the possibility that the user accidentally reveals her private information:

1. U opens a connection with S and sends her username to S;
2. U chooses between Service 1 and Service 2;
3.a Service 1: U sends a question to S and waits for an answer;
3.b Service 2: U sends her password to S and waits for a secure form. She then sends her question and data on the form and waits for an answer from S. A reliable user will use the form correctly and send data in a secure mode. Instead, an unreliable user will forget to use the form, or use it wrongly, thus leaking some of her private data. This may result in private information being sent to a public forum or to medical staff which is not compelled to maintain privacy.

In our calculus, this scenario may be described as the parallel composition of the processes in Figure 1. A session is an activation of a service, involving a number of participants with predefined roles. Here processes U and S communicate by opening a session on service a. The initiator ā[2] specifies that the number of participants is 2. Participants are denoted by integers: here U = 1, S = 2. In process U, the prefix a[1](α) means that U wants to act as participant 1 in service a, using channel α to communicate. Dually, in S, a[2](α) means that S will act as participant 2 in service a, communicating via channel α. Security levels appear as superscripts on both data and some operators (here ⊥ means “public” and ⊤ means “secret”): the user name un and the message contents in Service 1 can be public; the password pwd and the information exchanged in Service 2 should be secret. Levels on the operators are needed to track indirect flows, as will be explained in Section 6.
They may be ignored for the time being. When the session is established, via a synchronisation between the initiator and the prefixes a[i](α_i), U sends to S her username un^⊥. Then, according to whether she wishes a simple consultation or not, she chooses between the two services sv1 and sv2. This choice is expressed by the internal choice construct ⊕: if simple^⊥ then α ⊕^⊥ ⟨2, sv1⟩ ... else α ⊕^⊥ ⟨2, sv2⟩ ... describes a process sending on α to participant 2 either label sv1 or sv2, depending on the value of simple^⊥. If U chooses sv1, then she sends S a question (construct α!⟨2, que^⊥⟩) and receives the answer (construct α?(2, ans^⊥)). If U chooses sv2, then she sends her password to S and then waits to get back a secure form. At this point, according to her reliability (if gooduse(form^⊤)) she either sends her question and data in the secure form, or wrongly sends them in clear. The difference between the secure and insecure exchanges is modelled by the different security levels tagging values and variables in the prefixes α!⟨2, que^⊤/⊥⟩ and α?(2, ans^⊤/⊥).

Dually, process S receives the username from U and then waits for her choice of either label sv1 or label sv2. This is described by the external choice operator &: α &^⊥ (1, { sv1 : ..., sv2 : ... }) expresses the reception of the label sv1 or of the label sv2 from participant 1. In the first case, S will then receive a question and send the answer. This whole interaction is public. In the second case, S receives a password and sends a form, and then receives a question and sends the answer. In this case the interaction is secret.

Note that the execution of process I | S | U may be insecure if U is unreliable. Indeed, in U’s code, the test on gooduse(form^⊤) uses the secret value form^⊤.
Now, for security to be granted in the subsequent execution, all communications depending on form^⊤ should be secret. However, this will not be the case if the second branch of the conditional is taken, since in this case U sends a public question. On the other hand, the execution is secure when the first service is used, or when the second service is used properly.

This process is rejected by the type system of [5], which must statically ensure the correctness of all possible executions. Similarly, the security property of [5] fails to hold for this process, since two different public behaviours may be exhibited after testing the secret value gooduse(form^⊤): in one case the empty behaviour, in the other case the emission of que^⊥. Moreover, the bisimulation used to check security will fail only once que^⊥ has been put in the queue, and thus possibly exploited by an attacker. By contrast, the monitored semantics will block the very act of putting que^⊥ in the queue.

For the sake of conciseness, we deliberately simplified the scenario in the above example, by using finite services and a binary session between a server and a user. Note that several such sessions could run in parallel, each corresponding to a different impersonation of the user. A more realistic example would involve persistent services and allow several users to interact within the same session, and the server to delegate the question handling to the medical staff. This would bring into the scene other important features of our calculus, namely multiparty interaction and the mechanism of delegation. Our simple example is mainly meant to highlight the novel issue of monitored execution.

Our calculus is essentially the same as that studied in [5]. For the sake of simplicity, we do not consider here access control and declassification, although their addition would not pose any problem. Let (S, ≤) be a finite lattice of security levels, ranged over by ℓ, ℓ′.
We denote by ⊔ and ⊓ the join and meet operations on the lattice, and by ⊥ and ⊤ its minimal and maximal elements. We assume the following sets: values (booleans, integers), ranged over by v, v′, ...; value variables, ranged over by x, y, ...; service names, ranged over by a, b, ..., each of which has an arity n ≥ 2; service name variables, ranged over by ζ, ζ′, ...; identifiers, i.e., service names and value variables, ranged over by u, w, ...; channel variables, ranged over by α, β, ...; and labels, ranged over by λ, λ′, ... (acting like labels in labelled records). Sessions, the central abstraction of our calculus, are denoted by s, s′, ... A session represents a particular instance or activation of a service. Hence sessions only appear at runtime. We use p, q, ... to denote the participants of a session. In an n-ary session (a session corresponding to an n-ary service) p, q are assumed to range over the natural numbers 1, ..., n. We denote by Π a non-empty set of participants. Each session s has an associated set of channels with role s[p], one for each participant. Channel s[p] is the private channel through which participant p communicates with the other participants in the session s. A new session s on an n-ary service a is opened when the

r ::= a || s   Service/Session Name
c ::= α || s[p]   Channel
u ::= ζ || a   Identifier
v ::= true || false || ...   Value
e ::= x^ℓ || v^ℓ || not e || e and e′ || ...   Expression
D ::= X(x, α) = P   Declaration
Π ::= {p} || Π ∪ {p}   Set of participants
ϑ ::= v^ℓ || s[p]^ℓ || λ^ℓ   Message content
m ::= (p, Π, ϑ)   Message in transit
h ::= m · h || ε   Queue
H ::= H ∪ {s : h} || ∅   Q-set
P ::= ū[n]   n-ary session initiator
  || u[p](α).P   p-th session participant
  || c!⟨Π, e⟩.P   Value send
  || c?(p, x^ℓ).P   Value receive
  || c!^ℓ⟨Π, u⟩.P   Service name send
  || c?^ℓ(p, ζ).P   Service name receive
  || c!^ℓ⟨⟨q, c′⟩⟩.P   Channel send
  || c?^ℓ((p, α)).P   Channel receive
  || c ⊕^ℓ ⟨Π, λ⟩.P   Selection
  || c &^ℓ (p, {λ_i : P_i}_{i∈I})   Branching
  || if e then P else Q   Conditional
  || P | Q   Parallel
  || 0   Inaction
  || (νa)P   Name hiding
  || def D in P   Recursion
  || X⟨e, c⟩   Process call
Table 1: Syntax of processes, expressions and queues.

initiator ā[n] of the service synchronises with n processes of the form a[1](α₁).P₁, ..., a[n](αₙ).Pₙ, whose channels α_p then get replaced by s[p] in the body of P_p. While binary sessions may often be viewed as an interaction between a user and a server, multiparty sessions do not exhibit the same asymmetry. This is why we use an initiator to start the session once all the required “peer” participants are present. We use c to range over channel variables and channels with roles. Finally, we assume a set of process variables X, Y, ..., in order to define recursive behaviours.

As in [9], in order to model TCP-like asynchronous communications (with non-blocking send but message order preservation between a given pair of participants), we use queues of messages, denoted by h; an element of h may be one of the following: a value message (p, Π, v^ℓ), indicating that the value v^ℓ is sent by participant p to all participants in Π; a service name message (p, Π, a^ℓ), with a similar meaning; a channel message (p, q, s[p′]^ℓ), indicating that p delegates to q the role of p′ with level ℓ in session s; and a label message (p, Π, λ^ℓ), indicating that p selects the process with label λ among those offered by the set of participants Π. The empty queue is denoted by ε, and the concatenation of a message m to a queue h by h · m. Conversely, m · h means that m is the head of the queue. Since there may be interleaved, nested and parallel sessions, we distinguish their queues with names. We denote by s : h the named queue h associated with session s. We use H, K to range over sets of named queues with different session names, also called Q-sets.

Table 1 summarises the syntax of expressions, ranged over by e, e′, ..., and of processes, ranged over by P, Q, ...
, as well as the runtime syntax of the calculus (sessions, channels with role, messages, queues).

Let us briefly comment on the primitives of the language. We already described session initiation. Communications within a session are performed on a channel using the next four pairs of primitives: the send and receive of a value; the send and receive of a service name; the send and receive of a channel (where one participant transmits to another the capability of participating in another session with a given role); and the selection and branching operators (where one participant chooses one of the branches offered by another participant). Apart from the value send and receive constructs, all the send/receive and choice primitives are decorated with security levels, whose use will be justified later. When there is no risk of confusion we will omit the set delimiters {, }, particularly around singletons.

The operational semantics consists of a reduction relation on configurations < P, H >, which are pairs of a process P and a Q-set H. Indeed, queues need to be isolated from processes in our calculus (unlike in other session calculi, where queues are handled by running them in parallel with processes), since they will be the observable part of processes in our security and safety notions.

a[1](α₁).P₁ | ... | a[n](αₙ).Pₙ | ā[n] −→ (νs) < P₁{s[1]/α₁} | ... | Pₙ{s[n]/αₙ}, s : ε >   [Link]
< s[p]!⟨Π, e⟩.P, s : h > −→ < P, s : h · (p, Π, v^ℓ) >  where e ↓ v^ℓ   [SendV]
< s[q]?(p, x^ℓ).P, s : (p, q, v^ℓ) · h > −→ < P{v/x}, s : h >   [RecV]
< s[p]!^ℓ⟨Π, a⟩.P, s : h > −→ < P, s : h · (p, Π, a^ℓ) >   [SendS]
< s[q]?^ℓ(p, ζ).P, s : (p, q, a^ℓ) · h > −→ < P{a/ζ}, s : h >   [RecS]
< s[p]!^ℓ⟨⟨q, s′[p′]⟩⟩.P, s : h > −→ < P, s : h · (p, q, s′[p′]^ℓ) >   [SendC]
< s[q]?^ℓ((p, α)).P, s : (p, q, s′[p′]^ℓ) · h > −→ < P{s′[p′]/α}, s : h >   [RecC]
< s[p] ⊕^ℓ ⟨Π, λ⟩.P, s : h > −→ < P, s : h · (p, Π, λ^ℓ) >   [Label]
< s[q] &^ℓ (p, {λ_i : P_i}_{i∈I}), s : (p, q, λ_i^ℓ) · h > −→ < P_i, s : h >  where i ∈ I   [Branch]
if e then P else Q −→ P  where e ↓ true^ℓ   [If-T]
if e then P else Q −→ Q  where e ↓ false^ℓ   [If-F]
def X(x, α) = P in X⟨e, s[p]⟩ −→ def X(x, α) = P in P{v^ℓ/x}{s[p]/α}  where e ↓ v^ℓ   [Def]
< P, H > −→ (ν s̃) < P′, H′ >  ⇒  < def D in (P | Q), H > −→ (ν s̃) < def D in (P′ | Q), H′ >   [Defin]
C −→ (ν s̃) C′  ⇒  (ν r̃)(C ∥ C″) −→ (ν r̃)(ν s̃)(C′ ∥ C″)   [Scop]

Table 2: Standard reduction rules.

A configuration is a pair C = < P, H > of a process P and a Q-set H, possibly restricted with respect to service and session names, or a parallel composition (C ∥ C′) of two configurations whose Q-sets have disjoint session names. In a configuration (νs) < P, H >, all occurrences of s[p] in P and H and of s in H are bound. By abuse of notation we often write P instead of < P, ∅ >.

As usual, the operational semantics is defined modulo a structural equivalence ≡. The structural rules for processes are standard [11]. Among the rules for queues, we have one for commuting independent messages and another one for splitting a message for multiple recipients.
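The second of these queue rules can be sketched in Python. This is our own rendering, not part of the calculus: a message is a (sender, receivers, payload) triple as in Table 1, the function name is ours, and the condition under which two messages "commute" is only hinted at in the text.

```python
# Sketch: the structural rule splitting a message for multiple recipients.
# A message is (sender, receivers, payload); a queue is a list of messages.

def split_message(msg, q):
    """Split off the copy of msg addressed to participant q,
    leaving a copy for the remaining recipients (if any)."""
    p, receivers, payload = msg
    if q not in receivers:
        raise ValueError("q is not a recipient of this message")
    rest = receivers - {q}
    copies = [(p, frozenset({q}), payload)]
    if rest:
        copies.append((p, frozenset(rest), payload))
    return copies

# A value message from participant 1 to participants {2, 3}:
queue = [(1, frozenset({2, 3}), ("v", "bot"))]
# Participant 2 is about to read: expose its private copy at the head.
queue = split_message(queue[0], 2) + queue[1:]
```

With this representation, an input rule such as [RecV] only ever needs to match a message whose recipient field is the single reader, which is how the queue messages appear in Table 2.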
The structural equivalence of configurations allows the parallel composition ∥ to be eliminated via the rule:

(ν r̃) < P, H > ∥ (ν r̃′) < Q, K > ≡ (ν r̃ r̃′) < P | Q, H ∪ K >

where by hypothesis the session names in the Q-sets H, K are disjoint, by Barendregt convention r̃ and r̃′ have empty intersection and there is no capture of free names, and (ν r̃) C stands for (ν r₁) ··· (ν r_k) C, if r̃ = r₁ ··· r_k. Note that, modulo ≡, each configuration has the form (ν r̃) < P, H >.

The transitions for configurations have the form C −→ C′. They are derived using the reduction rules in Table 2, where we write P as short for < P, ∅ >.

Rule [Link] describes the initiation of a new session among n processes, corresponding to an activation of the service a of arity n. After the connection, the participants share a private session name s and the corresponding queue, initialised to s : ε. In each participant P_p, the channel variable α_p is replaced by the channel with role s[p]. This is the only synchronous interaction of the calculus. All the other communications, which take place within an established session, are performed asynchronously in two steps, via push and pop operations on the queue associated with the session.

The output rules [SendV], [SendS], [SendC] and [Label] push values, service names, channels and labels, respectively, into the queue s : h. In rule [SendV], e ↓ v^ℓ denotes the evaluation of the expression e to the value v^ℓ, where ℓ is the join of the security levels of the variables and values occurring in e. The input rules [RecV], [RecS], [RecC] and [Branch] perform the complementary operations. Rules [If-T], [If-F], [Def] and [Defin] are standard. The contextual rule [Scop] is also standard.
In this rule, Barendregt convention ensures that the names in s̃ are disjoint from those in r̃ and do not appear in C″. As usual, we use −→* for the reflexive and transitive closure of −→. We assume that communication safety and session fidelity are assured by a standard session type system [9].

As in [5], we assume that the observer can see the messages in session queues. As usual for security, observation is relative to a given downward-closed set of levels L ⊆ S, the intuition being that an observer who can see messages of level ℓ can also see all messages of level ℓ′ lower than ℓ. In the following, we shall always use L to denote a downward-closed subset of levels. For any such L, an L-observer will only be able to see messages whose levels belong to L, what we may call L-messages. Hence two queues that agree on L-messages will be indistinguishable for an L-observer. Let now L̄-messages be the complementary messages, those the L-observer cannot see. Then, an L-observer may also be viewed as an attacker who tries to reconstruct the dependency between L̄-messages and L-messages (and hence, ultimately, to discover the L̄-messages), by injecting himself different L̄-messages at each step and observing their effect on L-messages.

To formalise this intuition, a notion of L-equality =_L on Q-sets is introduced, representing indistinguishability of Q-sets by an L-observer. Based on =_L, a notion of L-bisimulation ≃_L formalises indistinguishability of processes by an L-observer. Formally, a queue s : h is L-observable if it contains some message with level in L. Then two Q-sets are L-equal if their L-observable queues have the same names and contain the same messages with level in L.
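As a reading aid for the formal definitions that follow, the projection ⇓L and the equality =_L can be sketched in Python. The encoding is ours (a level is a string, L a downward-closed set of levels, a message a (sender, receivers, (value, level)) triple); only the defining equations come from the paper.

```python
# Sketch of L-projection and L-equality on Q-sets (Definitions 4.1 and 4.2).
# A Q-set is a dict mapping session names to queues (lists of messages).

def lev(payload):
    """lev(v^l) = l: the security level attached to a message content."""
    _value, level = payload
    return level

def project_queue(h, L):
    """h ⇓ L: keep only the messages whose level belongs to L."""
    return [m for m in h if lev(m[2]) in L]

def project_qset(H, L):
    """H ⇓ L: project each queue, dropping queues with no L-message."""
    return {s: project_queue(h, L)
            for s, h in H.items() if project_queue(h, L)}

def l_equal(H, K, L):
    """H =_L K  iff  H ⇓ L = K ⇓ L."""
    return project_qset(H, L) == project_qset(K, L)

H = {"s": [(2, (1,), ("true", "top"))]}   # one secret message
K = {"s": []}                             # empty queue
assert l_equal(H, K, {"bot"})             # equal for a {bot}-observer
assert not l_equal(H, K, {"bot", "top"})  # distinguishable at {bot, top}
```

The final two assertions mirror the Q-sets H₁ and H₂ used in Example 4.8 below: a single secret message is invisible to a {⊥}-observer but visible as soon as ⊤ enters L.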
This equality is based on an L-projection operation on Q-sets, which discards all messages whose level is not in L. Let the function lev be given by: lev(v^ℓ) = lev(a^ℓ) = lev(s[p]^ℓ) = lev(λ^ℓ) = ℓ.

Definition 4.1 (L-Projection) The projection operation ⇓ L is defined inductively on messages, queues and Q-sets as follows:

(p, Π, ϑ) ⇓ L = (p, Π, ϑ) if lev(ϑ) ∈ L, and ε otherwise
ε ⇓ L = ε
(m · h) ⇓ L = m ⇓ L · h ⇓ L
∅ ⇓ L = ∅
(H ∪ {s : h}) ⇓ L = H ⇓ L ∪ {s : h ⇓ L} if h ⇓ L ≠ ε, and H ⇓ L otherwise

Definition 4.2 (L-Equality of Q-sets) Two Q-sets H and K are L-equal, written H =_L K, if H ⇓ L = K ⇓ L.

The idea is then to compare the behaviour of a process when it is paired with L-equal queues. However, we cannot allow arbitrary combinations of processes with queues, since this would lead us to reject intuitively secure processes as simple as s[1]?(2, x^⊥).0 and s[2]!⟨1, true^⊥⟩.0. As argued in [5], we may get around this problem by imposing two simple conditions, one on Q-sets (monotonicity) and the other on configurations (saturation). These conditions are justified by the fact that they are always satisfied in initial computations generated by typable processes (in the sense of [5]).

The first condition requires that in a Q-set, the security levels of messages with the same sender and common receivers should never decrease along a sequence.

Definition 4.3 (Monotonicity)
A queue is monotone if lev(ϑ₁) ≤ lev(ϑ₂) whenever the message (p, Π₁, ϑ₁) precedes the message (p, Π₂, ϑ₂) in the queue and Π₁ ∩ Π₂ ≠ ∅.

The second condition requires that in a configuration, the Q-set should always contain enough queues to enable all outputs of the process to reduce.

Definition 4.4 (Saturation)
A configuration ⟨P, H⟩ is saturated if each session name s occurring in P has a corresponding queue s : h in H.

We are now ready for defining our L-bisimulation, expressing indistinguishability by an L-observer. Unlike early definitions of L-bisimulation, which only allowed the “high state” to be changed at the start of computation, our definition allows it to be changed at each step, to account for dynamic contexts [7].

Definition 4.5 (L-Bisimulation) A symmetric relation R ⊆ (Pr × Pr) is an L-bisimulation if P₁ R P₂ implies, for any pair of monotone Q-sets H₁ and H₂ such that H₁ =_L H₂ and each < P_i, H_i > is saturated:

If < P₁, H₁ > −→ (ν r̃) < P₁′, H₁′ >, then there exist P₂′, H₂′ such that < P₂, H₂ > −→* ≡ (ν r̃) < P₂′, H₂′ >, where H₁′ =_L H₂′ and P₁′ R P₂′.

Processes P₁, P₂ are L-bisimilar, P₁ ≃_L P₂, if P₁ R P₂ for some L-bisimulation R.

Note that r̃ may either be the empty string or a single name, since it appears after a one-step transition. If it is a name, it may either be a service name a (communication of a private service) or a fresh session name s (opening of a new session). In the latter case, s cannot occur in P₂ and H₂ by Barendregt convention. Intuitively, a transition that adds or removes an L-message must be simulated in one or more steps, producing the same effect on the Q-set, whereas a transition that does not affect L-messages may be simulated by inaction. In the latter case, the structural equivalence ≡ may be needed in case the first process has created a restriction. The notions of L-security and security are now defined in the standard way:

Definition 4.6 (Security)
1. A process is L-secure if it is L-bisimilar with itself.
2. A process is secure if it is L-secure for every L.

The need for considering all downward-closed sets L is justified by the following example.

Example 4.7
Let S = {⊥, ℓ, ⊤} where ⊥ ≤ ℓ ≤ ⊤, and

P = ā[2] | a[1](α₁).P₁ | a[2](α₂).P₂
P₁ = α₁?(2, x^⊤). if x^⊤ then α₁!⟨2, false^ℓ⟩.0 else α₁!⟨2, true^ℓ⟩.0
P₂ = α₂!⟨1, true^⊤⟩.0

The process P is {⊥}-secure and S-secure, but it is not {⊥, ℓ}-secure, since there is a flow from level ⊤ to level ℓ in P₁, which is detectable by a {⊥, ℓ}-observer but not by a {⊥}-observer. We let the reader verify this fact formally, possibly after looking at the next example.

We show next that an input of level ℓ should not be followed by an action of level ℓ′ ≱ ℓ:
Example 4.8 (Insecurity of high input followed by low action)
Consider the process P and the Q-sets H₁ and H₂, where H₁ =_{⊥} H₂:

P = s[1]?(2, x^⊤). s[1]!⟨2, true^⊥⟩.0    H₁ = {s : (2, 1, true^⊤)}    H₂ = {s : ε}

Here we have < P, H₁ > −→ < s[1]!⟨2, true^⊥⟩.0, {s : ε} > = < P₁, H₁′ >, while < P, H₂ > cannot move. Since H₁′ = {s : ε} = H₂, we can proceed with P₁ = s[1]!⟨2, true^⊥⟩.0 and P₂ = P. Take now K₁ = K₂ = {s : ε}. Then < P₁, K₁ > −→ < 0, {s : (1, 2, true^⊥)} >, while < P₂, K₂ > cannot move. Since K₁′ = {s : (1, 2, true^⊥)} ≠_{⊥} {s : ε} = K₂, P is not {⊥}-secure.

With a similar argument we may show that Q = s[1]?(2, x^⊤). s[1]?(2, y^⊥).0 is not {⊥}-secure.

The need for security levels on value variables is justified by the following example.
Example 4.9 (Need for levels on value variables)
Suppose we had no levels on value variables. Consider the following process, which should be secure:

P = s[2]?(1, x). s[2]?(1, y).0 | s[1]!⟨2, true^⊥⟩. s[1]!⟨2, true^⊥⟩.0

Let H1 = {s : (1, 2, true^⊤)} =_L {s : ε} = H2. Then the transition

⟨P, H1⟩ → ⟨s[2]?(1, y).0 | s[1]!⟨2, true^⊥⟩. s[1]!⟨2, true^⊥⟩.0, {s : ε}⟩ = ⟨P′, H1′⟩

could not be matched by ⟨P, H2⟩. In fact, the first component of P cannot move in H2, and each computation of the second component yields an L-observable H2′ such that H1′ ≠_{⊥} H2′. Moreover, P cannot stay idle in H2, since P is not L-bisimilar to P′ (as is easy to see by a similar reasoning). By adding the level ⊥ to the variables x and y, we force the second component to move first in both ⟨P, H1⟩ and ⟨P, H2⟩.

Interestingly, an insecure component may be "sanitised" by its context, so that the insecurity is not detectable in the overall process. Clearly, in case of a deadlocking context, the insecurity is masked simply because the dangerous part is not executed. However, the curing context could also be a partner of the insecure component, as shown by the next example. This example is significant because it constitutes a non-trivial case of a process that is secure but not safe, as will be further discussed in Section 6.
Example 4.10 (Insecurity sanitised by parallel context)
Let R be obtained by composing the process P of Example 4.8 in parallel with a dual process P̄, and consider again the Q-sets H1 and H2, where H1 =_{⊥} H2:

R = P | P̄ = s[2]?(1, x^⊤). s[2]!⟨1, true^⊥⟩.0 | s[1]!⟨2, true^⊤⟩. s[1]?(2, y^⊥).0
H1 = {s : (1, 2, true^⊤)}        H2 = {s : ε}

Then the move ⟨P | P̄, H1⟩ → ⟨s[2]!⟨1, true^⊥⟩.0 | P̄, {s : ε}⟩ can be simulated by the sequence of two moves ⟨P | P̄, H2⟩ → ⟨P | s[1]?(2, y^⊥).0, {s : (1, 2, true^⊤)}⟩ → ⟨s[2]!⟨1, true^⊥⟩.0 | s[1]?(2, y^⊥).0, {s : ε}⟩, where H1′ = H2′ = {s : ε}.

Let us now compare the processes R1 = s[2]!⟨1, true^⊥⟩.0 | P̄ and R2 = s[2]!⟨1, true^⊥⟩.0 | s[1]?(2, y^⊥).0. Let K1, K2 be monotone Q-sets containing a queue s : h and such that K1 =_{⊥} K2. Now, if ⟨R1, K1⟩ moves first, either it does the high output of P̄, in which case ⟨R2, K2⟩ replies by staying idle, since the resulting processes will be equal and the resulting queues K1′, K2′ will be such that K1′ =_{⊥} K2′; or it executes its first component, in which case ⟨R2, K2⟩ does exactly the same, clearly preserving the {⊥}-equality of Q-sets, and it remains to prove that P̄ = s[1]!⟨2, true^⊤⟩. s[1]?(2, y^⊥).0 is ⊥-bisimilar to s[1]?(2, y^⊥).0. But this is easy to see, since if the first process moves, the second may stay idle, while if the second moves, the first may simulate it in two steps.

Conversely, if ⟨R2, K2⟩ moves first, either it executes its second component (if the queue allows it), in which case ⟨R1, K1⟩ simulates it in two steps, or it executes its first component, in which case we are reduced once again to proving that P̄ = s[1]!⟨2, true^⊤⟩. s[1]?(2, y^⊥).0 is ⊥-bisimilar to s[1]?(2, y^⊥).0.
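The L-indexed notions used in Examples 4.7 to 4.10 can be made concrete with a small sketch. The following Python fragment is our own illustration, not the paper's formalism: the encoding of levels as integers, the message tuples and all the names are assumptions. It checks downward-closure of observer sets over the lattice ⊥ ≤ ℓ ≤ ⊤ of Example 4.7, and compares Q-sets up to erasure of messages whose level lies outside L, as in the {⊥}-equality H1 =_{⊥} H2 of Example 4.8.

```python
from itertools import combinations

BOT, MID, TOP = 0, 1, 2          # hypothetical encoding of ⊥ ≤ ℓ ≤ ⊤
S = [BOT, MID, TOP]

def downward_closed(L):
    """An observer set must contain every level below any of its members."""
    return all(m in L for l in L for m in S if m <= l)

# The downward-closed subsets of S are {}, {⊥}, {⊥, ℓ} and S itself.
CLOSED = [set(c) for n in range(len(S) + 1)
          for c in combinations(S, n) if downward_closed(set(c))]

def project(queue, L):
    """Erase from a queue every message whose level is outside L."""
    return [(p, q, v) for (p, q, v, lvl) in queue if lvl in L]

def l_equal(H, K, L):
    """H =_L K: the Q-sets agree once non-L messages are erased."""
    return all(project(H.get(s, []), L) == project(K.get(s, []), L)
               for s in set(H) | set(K))

# The Q-sets of Example 4.8: a single high message versus an empty queue.
H1 = {'s': [(1, 2, True, TOP)]}
H2 = {'s': []}
```

Here `l_equal(H1, H2, {BOT})` holds, and so does `l_equal(H1, H2, {BOT, MID})`; only an observer set containing ⊤ tells the two Q-sets apart. This mirrors why Definition 4.6 must quantify over all downward-closed sets L.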
In this section we introduce the monitored semantics for our calculus. This semantics is defined on monitored processes M, M′, whose syntax is the following, assuming µ ∈ S:

M ::= P^µ  |  M | M  |  (νa)M  |  def D in M

In a monitored process P^µ, the level µ that tags P is called the monitoring level for P. It controls the execution of P by blocking any communication of level ℓ ≱ µ. Intuitively, P^µ represents a partially executed process, and µ is the join of the levels of the objects (values, labels or channels) received and of the conditions tested up to this point in the execution.

The monitored semantics is defined on monitored configurations C = ⟨M, H⟩. By abuse of notation, we use the same symbol C for standard and monitored configurations. The semantic rules simultaneously define a reduction relation C ↪ C′ and an error predicate C† on monitored configurations. As usual, the semantic rules are applied modulo a structural equivalence ≡. The new structural rules, specific to monitored processes, are:

(P1 | P2)^µ ≡ P1^µ | P2^µ        C† ∧ C ≡ C′ ⟹ C′†

The reduction rules of the monitored semantics are given in Table 3. Intuitively, the monitoring level is initially ⊥ and gets increased each time a test of higher or incomparable level, or an input of higher level, is crossed. Moreover, if ⟨P^µ, H⟩ attempts to perform a communication action of level ℓ ≱ µ, then ⟨P^µ, H⟩†. We say in this case that the reduction produces an error.

The reason why the monitoring level should take into account the level of inputs is that, as argued in Section 4, the process s[2]?(1, x^⊤). s[2]!⟨1, true^⊥⟩.0 is not secure. Hence it should not be safe either.

One may wonder whether monitored processes of the form P1^µ1 | P2^µ2, where µ1 ≠ µ2, are really needed.
The following example shows that, in the presence of concurrency, a single monitoring level (as used for instance in [4]) would not be enough.

Example 5.1 (Need for multiple monitoring levels)
Suppose we could only use a single monitoring level for the parallel process P below, which should intuitively be safe. Then a computation of P^⊥ would be successful or not depending on the order of execution of its parallel components:

P1 = α!⟨2, true^⊥⟩.0    P2 = α?(1, x^⊥).0    P3 = α!⟨4, true^⊤⟩.0    P4 = α?(3, y^⊤).0
P = ā[4] | a[1](α).P1 | a[2](α).P2 | a[3](α).P3 | a[4](α).P4

Here, if P1 and P2 communicate first, we would have the successful computation:

P^⊥ ↪* (νs)⟨(P3{s[3]/α} | P4{s[4]/α})^⊥, s : ε⟩ ↪* (νs)⟨0^⊤, s : ε⟩

Instead, if P3 and P4 communicate first, then we would run into an error:

P^⊥ ↪* (νs)⟨(P1{s[1]/α} | P2{s[2]/α})^⊤, s : ε⟩†

Intuitively, the monitoring level resulting from the communication of P3 and P4 should not constrain the communication of P1 and P2, since there is no causal dependency between them. Allowing different monitoring levels for different parallel components, when P3 and P4 communicate first we get:

P^⊥ ↪* (νs)⟨0^⊤ | (P1{s[1]/α} | P2{s[2]/α})^⊥, s : ε⟩ ↪* (νs)⟨0^⊤ | 0^⊥, s : ε⟩

[MLink]   a[1](α1).P1^µ1 | ··· | a[n](αn).Pn^µn | ā[n]^µn+1 ↪ (νs)⟨P1{s[1]/α1}^µ | ··· | Pn{s[n]/αn}^µ, s : ε⟩
          where µ = ⊔_{i ∈ {1,…,n+1}} µi

[MSendV]  if µ ≤ ℓ then ⟨s[p]!⟨Π, e⟩.P^µ, s : h⟩ ↪ ⟨P^µ, s : h · (p, Π, v^ℓ)⟩
          else ⟨s[p]!⟨Π, e⟩.P^µ, s : h⟩†        where e ↓ v^ℓ

[MRecV]   if µ ≤ ℓ then ⟨s[q]?(p, x^ℓ).P^µ, s : (p, q, v^ℓ) · h⟩ ↪ ⟨P{v/x}^ℓ, s : h⟩
          else ⟨s[q]?(p, x^ℓ).P^µ, s : (p, q, v^ℓ) · h⟩†

[MSendS]  if µ ≤ ℓ then ⟨s[p]!^ℓ⟨Π, a⟩.P^µ, s : h⟩ ↪ ⟨P^µ, s : h · (p, Π, a^ℓ)⟩
          else ⟨s[p]!^ℓ⟨Π, a⟩.P^µ, s : h⟩†

[MRecS]   if µ ≤ ℓ then ⟨s[q]?^ℓ(p, ζ).P^µ, s : (p, q, a^ℓ) · h⟩ ↪ ⟨P{a/ζ}^ℓ, s : h⟩
          else ⟨s[q]?^ℓ(p, ζ).P^µ, s : (p, q, a^ℓ) · h⟩†

[MSendC]  if µ ≤ ℓ then ⟨s[p]!^ℓ⟨⟨q, s′[p′]⟩⟩.P^µ, s : h⟩ ↪ ⟨P^µ, s : h · (p, q, s′[p′]^ℓ)⟩
          else ⟨s[p]!^ℓ⟨⟨q, s′[p′]⟩⟩.P^µ, s : h⟩†

[MRecC]   if µ ≤ ℓ then ⟨s[q]?^ℓ((p, α)).P^µ, s : (p, q, s′[p′]^ℓ) · h⟩ ↪ ⟨P{s′[p′]/α}^ℓ, s : h⟩
          else ⟨s[q]?^ℓ((p, α)).P^µ, s : (p, q, s′[p′]^ℓ) · h⟩†

[MLabel]  if µ ≤ ℓ then ⟨s[p]⊕^ℓ⟨Π, λ⟩.P^µ, s : h⟩ ↪ ⟨P^µ, s : h · (p, Π, λ^ℓ)⟩
          else ⟨s[p]⊕^ℓ⟨Π, λ⟩.P^µ, s : h⟩†

[MBranch] if µ ≤ ℓ then ⟨s[q]&^ℓ(p, {λi : Pi}i∈I)^µ, s : (p, q, λj^ℓ) · h⟩ ↪ ⟨Pj^ℓ, s : h⟩
          else ⟨s[q]&^ℓ(p, {λi : Pi}i∈I)^µ, s : (p, q, λj^ℓ) · h⟩†        where j ∈ I

[MIf-T]   (if e then P else Q)^µ ↪ P^(µ⊔ℓ)   if e ↓ true^ℓ
[MIf-F]   (if e then P else Q)^µ ↪ Q^(µ⊔ℓ)   if e ↓ false^ℓ

[MDef]    (def X(x, α) = P in X⟨e, s[p]⟩)^µ ↪ def X(x, α) = P in (P{v^ℓ/x}{s[p]/α})^µ        where e ↓ v^ℓ

[MDefin]  ⟨M, H⟩ ↪ (νs̃)⟨M′, H′⟩ ⟹ ⟨def D in (M | M″), H⟩ ↪ (νs̃)⟨def D in (M′ | M″), H′⟩

[MScopC]  C ↪ (νs̃)C′ and ¬C″† ⟹ (νr̃)(C ∥ C″) ↪ (νr̃)(νs̃)(C′ ∥ C″)        C† ⟹ (νr̃)(C ∥ C′)†

Table 3: Monitored reduction rules.

When a session is initiated, involving the initiator ā[n] as well as a complete set of "service callers" a[p](αp).Pp, 1 ≤ p ≤ n, the monitoring level of each of them contributes to the monitoring level of the session. Note that the fact that this monitoring level may be computed dynamically, as the join of the monitoring levels of the participants, exempts us from statically annotating services with levels, as it was necessary to do in [5] in order to type the various participants consistently. Consider the process:

s[2]?(1, x^⊤). if x^⊤ then b̄[2] else 0 | b[1](β). β!⟨2, true^⊥⟩.0 | b[2](β). β?(1, y^⊥).0
Here the monitoring level of the conditional becomes ⊤ after the test, and thus, assuming the if branch is taken, rule [MLink] will set the monitoring level of the session to ⊤. This will block the exchange of the ⊥-value between the last two components.

Example 5.2 (Need for security levels on transmitted service names)
This example shows the need for security levels on service names in rules [MSendS] and [MRecS].

s[2]?(1, x^⊤). if x^⊤ then s[2]!^ℓ⟨3, a⟩.0 else s[2]!^ℓ⟨3, b⟩.0
| s[3]?^ℓ(2, ζ). ζ̄[2]
| a[1](α). α!⟨2, true^⊥⟩.0 | a[2](α). α?(1, y^⊥).0
| b[1](β). β!⟨2, false^⊥⟩.0 | b[2](β). β?(1, y^⊥).0

This process is insecure because, depending on the high value received for x^⊤, it will initiate a session on service a or on service b, which both perform a low value exchange. If ℓ ≠ ⊤, the monitored semantics will yield an error in the outputs of the first line; otherwise it yields an error in the outputs of the last two lines.

Similar examples show the need for security levels on transmitted channels and labels.
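The monitor discipline of Table 3 can be summarised operationally: a communication of level ℓ fires only if µ ≤ ℓ, inputs and branchings raise the monitor to ℓ, outputs leave it unchanged, tests raise it to µ ⊔ ℓ without any check, and [MLink] starts a session at the join of all participants' monitors. A hypothetical sketch follows (our own encoding; levels are integers so that the order is total and joins are just `max` — in a general lattice one would test ¬(µ ≤ ℓ) instead of `>`):

```python
BOT, TOP = 0, 2                      # hypothetical integer encoding of ⊥, ⊤

class MonitorError(Exception):
    """Raised when a communication of level ℓ is attempted under µ ≰ ℓ."""

def step(mu, action, level):
    """Monitor level after one action of the given kind and level."""
    if action == 'test':               # [MIf-T]/[MIf-F]: µ becomes µ ⊔ ℓ
        return max(mu, level)
    if mu > level:                     # communications require µ ≤ ℓ
        raise MonitorError(f"level-{level} action under monitor {mu}")
    if action in ('input', 'branch'):  # [MRecV], [MBranch], ...: µ becomes ℓ
        return level
    return mu                          # [MSendV], [MLabel], ...: µ unchanged

def mlink(monitor_levels):
    """[MLink]: the new session runs at the join of all the monitors."""
    return max(monitor_levels)

# High input followed by low output (the unsafe process of Section 4):
mu = step(BOT, 'input', TOP)           # monitor raised to ⊤
try:
    step(mu, 'output', BOT)            # ⊥-output under ⊤-monitor: error
    blocked = False
except MonitorError:
    blocked = True
```

In the session on service b discussed before Example 5.2, `mlink([TOP, BOT, BOT])` is ⊤, so the subsequent ⊥-exchange between the two callers is blocked, exactly as the text describes.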
We now define the property of safety for monitored processes, from which we also derive a property of safety for processes. We then prove that if a process is safe, it is also secure. A monitored process may be "relaxed" to an ordinary process by removing all its monitoring levels.
Definition 6.1 (Demonitoring)
If M is a monitored process, its demonitoring |M| is defined by:

|P^µ| = P    |M1 | M2| = |M1| | |M2|    |(νa)M| = (νa)|M|    |def D in M| = def D in |M|

Intuitively, a monitored process M is safe if it can mimic at each step the transitions of the process |M|.

Definition 6.2 (Monitored process safety)
The safety predicate on monitored processes is coinductively defined by: M is safe if for any monotone Q-set H such that ⟨|M|, H⟩ is saturated: if ⟨|M|, H⟩ → (νr̃)⟨P, H′⟩ then ⟨M, H⟩ ↪ (νr̃)⟨M′, H′⟩, where |M′| = P and M′ is safe.
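The demonitoring of Definition 6.1 is a straightforward structural erasure. A minimal sketch, under the assumption of our own tuple encoding of monitored-process trees (the constructors `'tag'`, `'par'`, `'nu'`, `'def'` are hypothetical names, not the paper's syntax):

```python
def demonitor(M):
    """|M|: erase every monitoring level, following Definition 6.1 clause by clause."""
    kind = M[0]
    if kind == 'tag':                       # |P^µ| = P
        _, P, _mu = M
        return P
    if kind == 'par':                       # |M1 | M2| = |M1| | |M2|
        return ('par', demonitor(M[1]), demonitor(M[2]))
    if kind == 'nu':                        # |(νa)M| = (νa)|M|
        return ('nu', M[1], demonitor(M[2]))
    if kind == 'def':                       # |def D in M| = def D in |M|
        return ('def', M[1], demonitor(M[2]))
    raise ValueError(f"unknown constructor {kind!r}")

# Two parallel components carrying different monitors, one under a restriction:
M = ('par', ('tag', 'P1', 2), ('nu', 'a', ('tag', 'P2', 0)))
```

Here `demonitor(M)` yields `('par', 'P1', ('nu', 'a', 'P2'))`: the shape of the process is preserved while both monitoring levels disappear, which is what lets Definition 6.2 compare the moves of M against those of |M|.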
Definition 6.3 (Process safety)
A process P is safe if P^⊥ is safe.
We now show that if a process is safe, then none of its monitored computations starting with monitor ⊥ gives rise to an error. This result rests on the observation that ⟨M, H⟩ ↪ if and only if ⟨|M|, H⟩ → and ¬⟨M, H⟩†, and that if M is safe, then whenever a standard communication rule is applicable to |M|, the corresponding monitored communication rule is applicable to M.
Proposition 6.4 (Safety implies absence of run-time errors)
If P is safe, then every monitored computation

⟨P^⊥, ∅⟩ = ⟨M0, H0⟩ ↪ (νr̃1)⟨M1, H1⟩ ↪ ··· ↪ (νr̃k)⟨Mk, Hk⟩

is such that ¬⟨Mk, Hk⟩†.

Note that the converse of Proposition 6.4 does not hold, as shown by the next example. This means that we could not use absence of run-time errors as a definition of safety, since that would not be strong enough to guarantee our security property, which allows the pair of L-equal Q-sets to be refreshed at each step (while maintaining L-equality).

Example 6.5

P = ā[2] | a[1](α).P1 | a[2](α).P2
P1 = α!⟨2, true^⊤⟩. α?(2, x^⊤).0
P2 = α?(1, z^⊤). if z^⊤ then α!⟨1, false^⊤⟩.0 else α!⟨1, true^⊥⟩.0

Note first that this process is not ⊥-secure because after P1 has put the value true^⊤ in the Q-set, this value may be changed to false^⊤ while preserving L-equality of Q-sets, thus allowing the else branch of P2 to be explored by the bisimulation. This process is not safe either, because our definition of safety mimics L-bisimulation by refreshing the Q-set at each step. On the other hand, a simple monitored execution of ⟨P^⊥, ∅⟩, which uses at each step the Q-set produced at the previous step, would never take the else branch and would therefore always succeed. Hence the simple absence of run-time errors would not be sufficient to enforce security.

In order to prove that safety implies security, we need some preliminary results.
Lemma 6.6 (Monotonicity of monitoring)
Monitoring levels may only increase along execution: if ⟨P^µ, H⟩ ↪ (νr̃)⟨P′^µ′ | M, H′⟩, then µ ≤ µ′.

As usual, L-high processes modify Q-sets in a way which is transparent to L-observers.

Definition 6.7 (L-highness of processes) A process P is L-high if for any monotone Q-set H such that ⟨P, H⟩ is saturated, it satisfies the property: if ⟨P, H⟩ → (νr̃)⟨P′, H′⟩, then H =_L H′ and P′ is L-high.

Lemma 6.8
If P^µ is safe and µ ∉ L, then P is L-high.

We next define the bisimulation relation that will be used in the proof of soundness. Roughly, all monitored processes with a high monitoring level are related, while the other processes are related if they are congruent.
Definition 6.9 (Bisimulation for soundness proof: monitored processes)
Given a downward-closed set of security levels L ⊆ S, the relation R_L° on monitored processes is defined inductively as follows: M1 R_L° M2 if M1 and M2 are safe and one of the following holds:

1. M1 = P1^µ1, M2 = P2^µ2 and µ1, µ2 ∉ L;
2. M1 = M2 = P^µ and µ ∈ L;
3. Mi = ∏_{j=1}^{m} Nj^(i), where, for all j ∈ {1, …, m}, Nj^(1) R_L° Nj^(2) follows from (1) or (2);
4. Mi = (νa)Ni, where N1 R_L° N2;
5. Mi = def D in Ni, where N1 R_L° N2.

Definition 6.10 (Bisimulation for soundness proof: processes)
Given a downward-closed set of security levels L ⊆ S, the relation R_L on processes is defined by: P1 R_L P2 if there are M1, M2 such that Pi ≡ |Mi| for i = 1, 2, and M1 R_L° M2.

Our soundness result states that safety implies L-security, for any L. The informal argument goes as follows. Let "low" mean "in L" and "high" mean "not in L". If P is not L-secure, this means that there are two different observable low behaviours after a high input or in the two branches of a high conditional. This implies that there is some observable low action after the high input, or in at least one of the branches of the high conditional. But in this case the monitored semantics will yield an error, since it does so as soon as it meets an action of level ℓ ≱ µ, where µ is the monitoring level of the executing component (which will have been set to high after crossing the high input or the high condition).

Theorem 6.11 (Safety implies security)
If P is safe, P is also secure.
The converse of Theorem 6.11 does not hold, as shown by the process R of Example 4.10. A more classical example is s[2]?(1, x^⊤). if x^⊤ then s[2]!⟨1, true^⊥⟩.0 else s[2]!⟨1, true^⊥⟩.0.

There is a wide literature on the use of monitors (frequently in combination with types) for ensuring security, but most of this work has focussed so far on sequential computations; see for instance [8, 4, 14]. More specifically, [8] considers an automaton-based monitoring mechanism for information flow, combining static and dynamic analyses, for a sequential imperative while-language with outputs. The paper [4], which provided the initial inspiration for our work, deals with an ML-like language and uses a single monitoring level to control sequential executions. The work [1] shows how to enforce information-release policies, which may be viewed as relaxations of noninterference, by a combination of monitoring and static analysis, in a sequential language with dynamic code evaluation. Dynamic security policies and means for expressing them via security labels have been studied for instance in [12, 16].

In session calculi, concurrency is present not only among participants in a given session, but also among different sessions running in parallel and possibly involving some common partners. Hence, different monitoring levels are needed to control different parallel components, and these levels must be joined when the components convene to start a new session. As we use a general lattice of security levels (rather than a two-level lattice, as is often done), it may happen that while all the participants' monitors are "low", their join is "high", constraining all their exchanges in the session to be high too. Furthermore, we deal with structured memories (the Q-sets). In this sense, our setting is slightly more complex than some of the previously studied ones.
Moreover, a peculiarity of session calculi is that data with different security levels are transmitted on the same channel, which is also the reason why security levels are assigned to data, and not to channels. (Each session channel is used "polymorphically" to send objects of different types and levels, since it is the only means for a participant to communicate with the others in a given session.) Hence, although the intuition behind monitors is rather simple, its application to our calculus is not completely straightforward.

Session types have been proposed for a variety of calculi and languages. We refer to [6] for a survey of the session type literature. However, the integration of security requirements into session calculi is still at an early stage. A type system assuring that services comply with a given data exchange security policy is presented in [10]. Enforcement of integrity properties in multiparty sessions, using session types, has been studied in [3, 13]. These papers propose a compiler which, given a multiparty session description, implements cryptographic protocols that guarantee session execution integrity.

We expect that a version of our monitored semantics, enriched with labelled transitions, could prove useful to the programmer, either to help her localise and repair program insecurities, or to deliberately program well-controlled security transgressions, according to some dynamically determined condition. To illustrate this point, let us look back at our medical service example of Figure 1 in Section 2. In some special circumstances, we could wish to allow the user to send her message in clear, for instance in case of an urgency, when the user cannot afford to wait for data encryption and decryption.
Here, if in the code of U we replaced the test on gooduse(form^⊤) by a test on no-urgency^⊤ ∧ gooduse(form^⊤), then in case of urgency we would have a security violation, which however should not be considered incorrect, given that it is expected by the programmer. A labelled-transition monitored semantics, whose labels would represent security errors, would then allow the programmer to check that her code's insecurities are exactly the expected ones. This labelled semantics could also be used to control error propagation, thus avoiding blocking the execution of the whole process in case of non-critical or limited errors. In this case, labels could be recorded in the history of the process and the execution would be allowed to go on, postponing error analysis to natural breaking points (like the end of a session).

Acknowledgments
We would like to thank Kohei Honda, Nobuko Yoshida and the anonymous referees for helpful feedback.
References

[1] A. Askarov & A. Sabelfeld (2009): Tight Enforcement of Information-Release Policies for Dynamic Languages. In: Proc. CSF'09, IEEE Computer Society, pp. 43–59.
[2] A. Barth, J. Mitchell, A. Datta & S. Sundaram (2007): Privacy and Utility in Business Processes. In: Proc. CSF'07, IEEE Computer Society, pp. 279–294.
[3] K. Bhargavan, R. Corin, P. M. Deniélou, C. Fournet & J. J. Leifer (2009): Cryptographic Protocol Synthesis and Verification for Multiparty Sessions. In: Proc. CSF'09, IEEE Computer Society, pp. 124–140.
[4] G. Boudol (2009): Secure Information Flow as a Safety Property. In: Proc. FAST'08, LNCS, Springer.
[5] S. Capecchi, I. Castellani, M. Dezani-Ciancaglini & T. Rezk (2010): Session Types for Access and Information Flow Control. In: Proc. CONCUR'10, LNCS, Springer.
[6] M. Dezani-Ciancaglini & U. de' Liguoro (2010): Sessions and Session Types: an Overview. In: Proc. WS-FM'09, LNCS, Springer.
[7] R. Focardi & S. Rossi (2002): Information Flow Security in Dynamic Contexts. In: Proc. CSFW'02, IEEE Computer Society Press, pp. 307–319.
[8] G. Le Guernic, A. Banerjee, T. Jensen & D. A. Schmidt (2007): Automata-based Confidentiality Monitoring. In: Proc. ASIAN'06, LNCS, Springer.
[9] K. Honda, N. Yoshida & M. Carbone (2008): Multiparty Asynchronous Session Types. In: Proc. POPL'08, ACM Press, pp. 273–284, doi:10.1145/1328438.1328472.
[10] A. Lapadula, R. Pugliese & F. Tiezzi (2007): Regulating Data Exchange in Service Oriented Applications. In: Proc. FSEN'07, LNCS, Springer.
[11] R. Milner (1999): Communicating and Mobile Systems: the Pi-Calculus. CUP.
[12] A. C. Myers & B. Liskov (2000): Protecting Privacy using the Decentralized Label Model. ACM Transactions on Software Engineering and Methodology 9, pp. 410–442, doi:10.1145/363516.363526.
[13] J. Planul, R. Corin & C. Fournet (2009): Secure Enforcement for Global Process Specifications. In: Proc. CONCUR'09, LNCS, Springer.
[14] A. Sabelfeld & A. Russo (2009): From Dynamic to Static and Back: Riding the Roller Coaster of Information-flow Control Research. In: Proc. PSI'09, LNCS, Springer.
[15] K. Takeuchi, K. Honda & M. Kubo (1994): An Interaction-based Language and its Typing System. In: Proc. PARLE'94, LNCS, Springer.