Embedding a Deterministic BFT Protocol in a Block DAG
MARIA A SCHETT,
University College London, United Kingdom
GEORGE DANEZIS,
University College London, United Kingdom
This work formalizes the structure and protocols underlying recent distributed systems leveraging block DAGs, which essentially encode Lamport's happened-before relations between blocks, as their core network primitives. We then present an embedding of any deterministic Byzantine fault tolerant protocol P to employ a block DAG for interpreting interactions between servers. Our main theorem proves that this embedding maintains all safety and liveness properties of P. Technically, our theorem is based on the insight that a block DAG merely acts as an efficient reliable point-to-point channel between instances of P, while also using P for efficient message compression.

Recent interest in blockchain and cryptocurrencies has resulted in a renewed interest in Byzantine fault tolerant consensus for state machine replication, as well as Byzantine consistent and reliable broadcast that is sufficient to build payment systems [2, 12]. A number of designs [21] for such mechanisms depart from the traditional setting of participants directly sending protocol messages to each other, and rely instead on a common higher-level abstraction where participants exchange blocks of transactions, linking cryptographically to past blocks—generalizing the idea of a blockchain to a more generic directed acyclic graph embodying Lamport's happened-before relations [16] between blocks, which we refer to as a block DAG. Examples of such designs are
Hashgraph [1] used by the Hedera network, as well as
Aleph [11],
Blockmania [7], and
Flare [20]. These works argue a number of advantages for the block DAG approach. First, maintaining a joint block DAG is simple and scalable, and can leverage widely-available distributed key-value stores. Second, they report impressive performance results compared with traditional protocols that materialize point-to-point messages as direct network messages. This results from batching many transactions in each block; using a low number of cryptographic signatures; having minimal overhead when running deterministic parts of the protocol; using a common block DAG logic while performing network IO, and only applying the higher-level protocol logic off-line, possibly later; and, as a result, supporting running many instances of protocols in parallel ‘for free’. We take these claimed performance and implementation simplicity advantages as a given and do not examine them further.

We note, however, that while the protocols may be simple and performant when implemented, their specification, and arguments for correctness, safety and liveness, are far from simple. Their proofs and arguments are usually inherently tied to their specific applications and requirements, but both specification and formal arguments of
Hashgraph, Aleph, Blockmania, and
Flare are structured around two phases: (i) building a block DAG, and (ii) running a protocol on top of the block DAG. We generalize their arguments by giving an abstraction of a block DAG as a reliable point-to-point link. We can then rely on this abstraction to simulate a protocol P—as a black-box—on top of this point-to-point link, maintaining the safety and liveness properties of P. We hope that this modular formulation of the underlying mechanisms, through a clear separation from the high-level protocol P and the underlying block DAG, allows for easy re-usability and strengthens the foundations and persuasiveness of systems based on block DAGs.

In this work we present a formalization of a block DAG, the protocols to maintain a joint block DAG, and its properties. We show that any deterministic Byzantine fault tolerant (BFT) protocol can be embedded in this block DAG, while maintaining its safety and liveness properties. We demonstrate that the claimed advantageous properties of block DAG based protocols, such as the efficient message compression, batching of signatures, the ability to run multiple instances ‘for free’, and off-line interpretation of the block DAG, emerge from the generic composition we present. Therefore, the proposed composition not only allows for straightforward correctness arguments, but also preserves the claimed advantages of using a block DAG approach, making it a useful abstraction not only to analyze but also to implement systems that offer both high assurance and high performance.

Fig. 1. Components and interfaces.
Overview.
Figure 1 shows the interfaces and components of our proposed block DAG framework, parametric in a deterministic BFT protocol P. At the top, we have a user seeking to run one or multiple instances of P on servers Srvrs. First, to distinguish between multiple protocol instances, the user assigns a label ℓ from a set of labels L. Now, for P there is a set of possible requests Rqsts_P. But instead of requesting 𝑟 ∈ Rqsts_P from 𝑠𝑖 ∈ Srvrs running P for protocol instance ℓ, the user calls the high-level interface of our block DAG framework: request(ℓ, 𝑟) in shim(P). Internally, 𝑠𝑖 passes (ℓ, 𝑟) on to gossip(G)—which continuously builds 𝑠𝑖's block DAG G by receiving and disseminating blocks. The passed (ℓ, 𝑟) is included into the next block 𝑠𝑖 disseminates, and 𝑠𝑖 also includes references to other received blocks, where cryptographic primitives prevent byzantine servers from adding cycles between blocks [17]. These blocks are continuously exchanged by the servers utilizing the low-level interface to the network. In Section 3 we formally define the block DAG, its properties, and protocols for servers to maintain a joint block DAG. Independently, as indicated by the dotted lines, 𝑠𝑖 interprets P by reading G and running interpret(G, P). To do so, 𝑠𝑖 locally simulates every protocol instance P with label ℓ by simulating one process instance of P(ℓ) for every server 𝑠 ∈ Srvrs. To drive the simulation, 𝑠𝑖 passes the request 𝑟 read from a block in G to P, and then 𝑠𝑖 simulates the message exchange between any two servers based on the structure of the block DAG and the deterministic protocol P. Therefore 𝑠𝑖 moves messages between in- and out-buffers Ms_P[in, ℓ] and Ms_P[out, ℓ]. Eventually, the simulation P(ℓ) of the server 𝑠𝑖 will indicate 𝑖 from the set of possible indications Inds_P.
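The request/indicate flow just described can be sketched in code. The following Python sketch is illustrative only: the names Gossip, Shim, add, and on_indicate are ours and not part of the paper's formalization; it only shows how shim(P) routes user requests into the gossip buffer and surfaces indications produced by interpret(G, P).

```python
from collections import defaultdict

class Gossip:
    """Stands in for gossip(G): collects (label, request) pairs that will
    be included in the next block this server disseminates."""
    def __init__(self):
        self.rqsts = []                      # read when the next block is built
    def add(self, label, request):
        self.rqsts.append((label, request))

class Shim:
    """shim(P): the user-facing interface of the block DAG framework."""
    def __init__(self, gossip, on_indicate):
        self.gossip = gossip
        self.on_indicate = on_indicate       # callback to the user of P
        self.indicated = defaultdict(list)
    def request(self, label, request):
        # request(l, r): hand (l, r) to gossip for inclusion in a block
        self.gossip.add(label, request)
    def indicate(self, label, indication):
        # called once interpret(G, P) produces an indication for label
        self.indicated[label].append(indication)
        self.on_indicate(label, indication)

g = Gossip()
shim = Shim(g, on_indicate=lambda l, i: None)
shim.request("l1", "broadcast(m)")
assert g.rqsts == [("l1", "broadcast(m)")]
```

From the user's perspective only request and the indication callback are visible; everything between them is the block DAG machinery of the following sections.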
We show how the block DAG essentially acts as a reliable point-to-point link and describe how any BFT protocol P can be interpreted on a block DAG in Section 4. Finally, after interpret indicated 𝑖, shim(P) can indicate 𝑖 for ℓ to the user of P. From the user's perspective, the embedding of P acted as P, i.e. shim(P) maintained P's interfaces and properties. We prove this in Section 5 and illustrate the block DAG framework for P instantiated with a byzantine reliable broadcast protocol. We give related work in Section 6, and conclude in Section 7, where we discuss integration aspects of higher-level protocols and the block DAG framework—including challenges in embedding protocols with non-determinism, more advanced cryptography, and BFT protocols operating under partial synchrony. Contribution.
We show that using the block DAG framework of Figure 1 for a deterministic BFT protocol P maintains the (i) interfaces, and (ii) safety and liveness properties of P (Theorem 5.1). The argument is generic: interpreting the eventually joint block DAG implements a reliable point-to-point link (Lemma 3.7, Lemma 4.3). Using this reliable point-to-point link, any server can locally run a simulation of P as a black-box. This simulation is an execution of P and thus retains the properties of P. By using the block DAG framework, the user gains efficient message compression and runs many instances of P in parallel ‘for free’. System Model.
We assume a finite set of servers
Srvrs. A correct server 𝑠 ∈ Srvrs faithfully follows a protocol P. When 𝑠 is byzantine, then 𝑠 behaves arbitrarily. However, we assume byzantine servers are computationally bound (e.g. 𝑠 cannot forge signatures, or find collisions in cryptographic hash functions) and cannot interfere with the Trusted Computing Base of correct servers (e.g. kill the electricity supply of correct servers). The set Srvrs is fixed and known by every 𝑠′ ∈ Srvrs, and we assume 3𝑓 + 1 servers to tolerate at most 𝑓 byzantine servers. The set of all messages in protocol P is M_P. Every message 𝑚 ∈ M_P has a 𝑚.sender and a 𝑚.receiver. We assume an arbitrary, but fixed, total order on messages: <_M. A protocol P is deterministic if a state 𝑞 and a sequence of messages from M_P determine a state 𝑞′ and out-going messages 𝑀 ⊆ M_P. In particular, deterministic protocols do not rely on random behavior, such as coin-flips. The exact requirements on network synchronicity depend on the protocol P that we want to embed, e.g. we may require partial synchrony [8] to avoid FLP [9]. The only network assumption we impose for building block DAGs is the following: Assumption 1 (Reliable Delivery).
For two correct servers 𝑠1 and 𝑠2, if 𝑠1 sends a block 𝐵 to 𝑠2, then eventually 𝑠2 receives 𝐵.

Cryptographic Primitives. We assume a secure cryptographic hash function hash : 𝐴 → 𝐴′ and write hash(𝑥) for the hash of 𝑥 ∈ 𝐴, and hash(𝐴) for 𝐴′ (cf. Definition A.1). We further assume a secure cryptographic signature scheme [14]: given a set of signatures Σ we have functions sign : Srvrs × M → Σ and verify_𝜎 : Srvrs × M × Σ → B, where verify_𝜎(𝑠, 𝑚, 𝜎) = true iff sign(𝑠, 𝑚) = 𝜎. Given computational bounds on all participants, appropriate parameters for these schemes can be chosen to make their probability of failure negligible, and for the remainder of this work we assume their probability of failure to be zero.

Directed Acyclic Graphs. A directed graph G is a pair of vertices V and edges E ⊆ V × V. We write ∅ for the empty graph. If there is an edge from 𝑣 to 𝑣′, that is (𝑣, 𝑣′) ∈ E, we write 𝑣 ⇀ 𝑣′. If 𝑣′ is reachable from 𝑣, then (𝑣, 𝑣′) is in the transitive closure of ⇀, and we write ⇀⁺. We write ⇀* for the reflexive and transitive closure, and 𝑣 ⇀ⁿ 𝑣′ for 𝑛 > 0 if 𝑣′ is reachable from 𝑣 in 𝑛 steps. A graph G is acyclic if 𝑣 ⇀⁺ 𝑣′ implies 𝑣 ≠ 𝑣′ for all nodes 𝑣, 𝑣′ ∈ G. We abbreviate 𝑣 ∈ G if 𝑣 ∈ V_G, and 𝑉 ⊆ G if 𝑣 ∈ G for all 𝑣 ∈ 𝑉. Let G1 and G2 be directed graphs. We define G1 ∪ G2 as (V_G1 ∪ V_G2, E_G1 ∪ E_G2), and G1 ⊑ G2 holds if V_G1 ⊆ V_G2 and E_G1 = E_G2 ∩ (V_G1 × V_G1). Note, for ⊑ we not only require E_G1 ⊆ E_G2, but additionally E_G1 must already contain all edges from E_G2 between vertices in G1. The following definition to insert a new vertex 𝑣 is restrictive: it permits extending G only by a vertex 𝑣 and edges to this 𝑣. Definition 2.1.
Let G be a directed graph, 𝑣 be a vertex, and 𝐸 be a set of edges of the shape {(𝑣𝑖, 𝑣) | 𝑣𝑖 ∈ 𝑉 ⊆ G}. We define insert(G, 𝑣, 𝐸) = (V_G ∪ {𝑣}, E_G ∪ 𝐸). Lemma 2.2.
For a directed graph G, a vertex 𝑣, and a set of edges 𝐸 = {(𝑣𝑖, 𝑣) | 𝑣𝑖 ∈ 𝑉 ⊆ G}, the following properties of insert(G, 𝑣, 𝐸) hold: (1) if 𝑣 ∈ G and 𝐸 ⊆ E_G, then insert(G, 𝑣, 𝐸) = G; (2) if 𝐸 = {(𝑣𝑖, 𝑣) | 𝑣𝑖 ∈ 𝑉 ⊆ G} and 𝑣 ∉ G, then G ⊑ insert(G, 𝑣, 𝐸); and (3) if G is acyclic and 𝑣 ∉ G, then insert(G, 𝑣, 𝐸) is acyclic.

To give some intuitions, for Lemma 2.2 (2), if 𝑣 ∈ G and G′ = insert(G, 𝑣, 𝐸), then E_G′ ∩ (V_G × V_G) = E_G may not hold. For example, let G have vertices 𝑣1 and 𝑣2 with E_G = ∅, and G′ = insert(G, 𝑣2, {(𝑣1, 𝑣2)}) with E_G′ = {(𝑣1, 𝑣2)}. Now E_G ≠ E_G′ ∩ (V_G × V_G). For Lemma 2.2 (3), if 𝑣 ∈ G, then insert(G, 𝑣, 𝐸) may add a cycle. For example, take G with vertices {𝑣1, 𝑣2} and E_G = {(𝑣1, 𝑣2)}; then insert(G, 𝑣1, {(𝑣2, 𝑣1)}) contains a cycle.

The networking component of the block DAG protocol between servers is defined by gossip in Algorithm 1, and is executed by all correct servers. This protocol is very simple: it has one core message type, namely a block, which is constantly disseminated. It contains simple meta-data, a signature, authentication for references to previous blocks, and requests associated to instances of protocol P. Servers only exchange and validate blocks. Now, although servers build their block DAGs locally, eventually correct servers have a joint block DAG G. The servers can then independently interpret G as multiple instances of P as defined in Algorithm 2 in Section 4.

Definition 3.1. A block 𝐵 ∈ Blks has (i) an identifier n of the server 𝑠 which built 𝐵, (ii) a sequence number k ∈ N, (iii) a finite list of hashes of predecessor blocks preds = [ref(𝐵1), …, ref(𝐵𝑘)], (iv) a finite list of labels and requests rs ∈ L × Rqsts, and (v) a signature 𝜎 = sign(n, ref(𝐵)). Here, ref is a secure cryptographic hash function computed from n, k, preds, and rs, but not 𝜎. By not including 𝜎, sign(𝐵.n, ref(𝐵)) is well defined.

We use 𝐵 and ref(𝐵) interchangeably, which is justified by collision resistance of ref (cf. Definition A.1 (3)). We use register notation, e.g. 𝐵.n or 𝐵.𝜎, to refer to elements of a block 𝐵, and abbreviate 𝐵′ ∈ {𝐵′ | ref(𝐵′) ∈ 𝐵.preds} with 𝐵′ ∈ 𝐵.preds. Given blocks 𝐵 and 𝐵′ with 𝐵.n = 𝐵′.n and 𝐵′.k = 𝐵.k + 1, if 𝐵 ∈ 𝐵′.preds then we call 𝐵 a parent of 𝐵′ and write 𝐵′.parent = 𝐵. We call 𝐵 a genesis block if 𝐵.k = 0. A genesis block 𝐵 cannot have a parent block, because 𝐵.k = 0 and 0 is minimal in N. Lemma 3.2.
For blocks 𝐵1 and 𝐵2, if 𝐵1 ∈ 𝐵2.preds then 𝐵2 ∉ 𝐵1.preds.

Lemma 3.2 prevents a byzantine server ˇ𝑠 from including a cyclic reference between ˇ𝐵 and 𝐵 by (1) waiting for—or building itself—a block 𝐵 with ref(ˇ𝐵) ∈ 𝐵.preds, and then (2) building a block ˇ𝐵 such that ref(𝐵) ∈ ˇ𝐵.preds. As with secure time-lines [17], Lemma 3.2 gives a temporal ordering on 𝐵 and ˇ𝐵. This is a static, cryptographic property, based on the security of hash functions, and not dependent e.g. on the order in which blocks are received on a network. While this prevents byzantine servers from introducing cycles, they can still build "faulty" blocks, and hence we impose the following validity conditions: Definition 3.3.
A server 𝑠 considers a block 𝐵 valid, written valid(𝑠, 𝐵), if (i) 𝑠 confirms verify_𝜎(𝐵.n, ref(𝐵), 𝐵.𝜎), i.e. that 𝐵.n built 𝐵, (ii) either (a) 𝐵 is a genesis block, or (b) 𝐵 has exactly one parent, and (iii) 𝑠 considers all blocks 𝐵′ ∈ 𝐵.preds valid.

Fig. 2. A block DAG with 3 blocks 𝐵1, 𝐵2, and 𝐵3.

Fig. 3. A block DAG, where ˇ𝑠 is equivocating on the blocks 𝐵3 and 𝐵4 with sequence number 1.

Especially, (ii) deserves our attention: a server ˇ𝑠 may still build two different blocks having the same parent. For example, all blocks in Figure 3 are valid. However, ˇ𝑠 will not be able to create a further block to 'join' these two blocks with a different parent—their successors will remain split. Essentially, this forces a linear history from every block. We assume that if a correct server 𝑠 considers a block 𝐵 valid, then 𝑠 can forward any block 𝐵′ ∈ 𝐵.preds. That is, 𝑠 has received the full content of 𝐵′—not only ref(𝐵′)—and persistently stores 𝐵′. As there are no cyclic references in blocks, the least and greatest fix-point of valid coincide. From valid blocks and their predecessors, a correct server builds a block DAG: Definition 3.4.
For a server 𝑠, a block DAG G ∈ Dags is a directed acyclic graph with vertices V_G ⊆ Blks, where (i) valid(𝑠, 𝐵) holds for all 𝐵 ∈ V_G, and (ii) if 𝐵 ∈ 𝐵′.preds then 𝐵 ∈ V_G and (𝐵, 𝐵′) ∈ E_G holds for all 𝐵′ ∈ V_G.

Let 𝐵′ be a block such that valid(𝑠, 𝐵′) holds and 𝐵 ∈ G for all 𝐵 ∈ 𝐵′.preds. Then 𝑠 inserts 𝐵′ in G by insert(G, 𝐵′, {(𝐵, 𝐵′) | 𝐵 ∈ 𝐵′.preds}) after Definition 2.1, and we write G.insert(𝐵′). The preconditions guarantee that G.insert(𝐵′) is a block DAG (Lemma A.3). Example 3.5.
In Figure 2 we show a block DAG with three blocks 𝐵1, 𝐵2, and 𝐵3, where 𝐵1 = {n = 𝑠1, k = 0, preds = []}, 𝐵2 = {n = 𝑠2, k = 0, preds = []}, and 𝐵3 = {n = 𝑠2, k = 1, preds = [ref(𝐵2), ref(𝐵1)]}. Here, parent(𝐵3) = 𝐵2. Consider now Figure 3 adding the block 𝐵4 = {n = 𝑠2, k = 1, preds = [ref(𝐵1), ref(𝐵2)]}. With block 𝐵4, ˇ𝑠 = 𝑠2 is equivocating on the block 𝐵3—and vice versa.

Algorithm 1 shows how a server 𝑠 builds (i) its block DAG G in lines 4–13, and (ii) its current block B, by including requests and references to other blocks, in lines 14–18. The servers communicate by exchanging blocks. Assumption 1 guarantees that a correct 𝑠 will eventually receive a block from another correct server. Moreover, every correct server 𝑠 will regularly request disseminate() in line 14 and will eventually send its own block B in line 17 of Algorithm 1, guaranteed by the high-level protocol (cf. Section 5).

Every server 𝑠 operates on four data structures. For one, the data structures shared with interpret in Algorithm 2, given as arguments in line 1: (i) the block DAG G, which interpret will only read, and (ii) a buffer rqsts, where shim(P) inserts pairs of labels and requests. On the other hand, 𝑠 also keeps (iii) the block B which 𝑠 currently builds (line 2), and (iv) a buffer blks of received blocks (line 3). To build its block DAG, 𝑠 inserts blocks into G in line 7 and line 16. Lemma A.5 guarantees that by inserting those blocks G remains a block DAG. To insert a block, 𝑠 keeps track of its received blocks as candidate blocks in the buffer blks (lines 4–5). Whenever 𝑠 considers a 𝐵′ ∈ blks valid (line 6), 𝑠 inserts 𝐵′ in G (line 7). However, to consider a block 𝐵′ valid, 𝑠 has to consider all its predecessors valid—and 𝑠 may not have yet received every 𝐵 ∈ 𝐵′.preds. That is, 𝐵′ ∈ blks but 𝐵 ∉ blks and 𝐵 ∉ G (cmp. line 10). Now, 𝑠 can request forwarding of 𝐵 from the server that built 𝐵′, i.e. from 𝑠′ where 𝐵′.n = 𝑠′, by sending FWD 𝐵 to 𝑠′ (lines 10–11). To prevent 𝑠 from flooding 𝑠′, an implementation would guard lines 10–11, e.g. by a timer Δ_𝐵′.

1    module gossip(𝑠 ∈ Srvrs, G ∈ Dags, rqsts ∈ L × Rqsts)
2        B ≔ {n: 𝑠, k: 0, preds: [], rs: [], 𝜎: null} ∈ Blks
3        blks ≔ ∅ ⊆ Blks
4        when received 𝐵 ∈ Blks and 𝐵 ∉ G
5            blks ≔ blks ∪ {𝐵}
6        when valid(𝑠, 𝐵′) for some 𝐵′ ∈ blks
7            G.insert(𝐵′)
8            B.preds ≔ B.preds · [ref(𝐵′)]
9            blks ≔ blks \ {𝐵′}
10       when 𝐵′ ∈ blks and 𝐵 ∈ 𝐵′.preds where 𝐵 ∉ blks and 𝐵 ∉ G
11           send FWD ref(𝐵) to 𝐵′.n
12       when received FWD ref(𝐵) from 𝑠′ and 𝐵 ∈ G
13           send 𝐵 to 𝑠′
14       when disseminate()
15           B ≔ {B with rs: rqsts.get(), 𝜎: sign(𝑠, B)}
16           G.insert(B)
17           send B to every 𝑠′ ∈ Srvrs
18           B ≔ {n: 𝑠, k: B.k + 1, preds: [ref(B)], rs: [], 𝜎: null}

Algorithm 1:
Building the block DAG G and block B.

On the other hand, 𝑠 also answers forwarding requests for a block 𝐵 from 𝑠′, where 𝐵 ∈ 𝐵′.preds of some block 𝐵′ disseminated by 𝑠 (lines 12–13). It is not necessary to request forwarding from servers other than 𝑠′. We only require that correct servers will eventually share the same blocks. This mechanism, together with Assumption 1 and 𝑠's eventual dissemination of B, allows us to establish the following lemma: Lemma 3.6.
For a correct server 𝑠 executing gossip, if 𝑠 receives a block 𝐵, which 𝑠 considers valid, then (1) every correct server will eventually receive 𝐵, and (2) every correct server will eventually consider 𝐵 valid.

In parallel to building G, 𝑠 builds its current block B by (i) continuously adding a reference to any block 𝐵′, which 𝑠 receives and considers valid, in line 8 (adding at most one reference to 𝐵′, cf. Lemma A.6), and (ii) eventually sending B to every server in line 17. Just before 𝑠 sends B, 𝑠 injects literal inscriptions of (ℓ𝑖, 𝑟𝑖) ∈ rqsts into B in line 15. Now rs holds requests 𝑟𝑖 for the protocol instances P with label ℓ𝑖. These requests will eventually be read by interpret in Algorithm 2. Finally, 𝑠 signs B in line 15, sends B to every server, and starts building its next B in line 18 by incrementing the sequence number k, initializing preds with the parent block, as well as clearing rs and 𝜎.

So far we established how 𝑠 builds its own block DAG. Next we want to establish the concept of a joint block DAG between two correct servers 𝑠 and 𝑠′. Let G_𝑠 and G_𝑠′ be the block DAGs of 𝑠 and 𝑠′. We define their joint block DAG G′ as a block DAG G′ ⊒ G_𝑠 ∪ G_𝑠′. This joint block DAG is a block DAG for 𝑠 and for 𝑠′ (Lemma A.7). Intuitively, we want any two correct servers to be able to 'gossip some more' and arrive at their joint block DAG G′. Lemma 3.7.
Let 𝑠 and 𝑠′ be correct servers with block DAGs G_𝑠 and G_𝑠′. By executing gossip in Algorithm 1, eventually 𝑠 has a block DAG G′_𝑠 such that G′_𝑠 ⊒ G_𝑠 ∪ G_𝑠′. Proof.
By Lemma A.5 any block DAG G′ obtained through gossip is a block DAG, and by Lemma A.7 G′ is a block DAG for 𝑠. It remains to show that by executing gossip, eventually G′ will be the block DAG for 𝑠. As 𝑠′ received and considers all 𝐵 ∈ G_𝑠′ valid, by Lemma 3.6 (2) 𝑠 will eventually consider every 𝐵 valid. By executing gossip, 𝑠 will eventually insert every 𝐵 in its block DAG, and G′ will contain all 𝐵 ∈ G_𝑠′. □

In the next section, we will show how 𝑠 and 𝑠′ can independently interpret a deterministic protocol P on this joint block DAG. But before we do so, we want to highlight that the gossip protocol retains the key benefits reported by works using the block DAG approach, namely simplicity and amenability to high-performance implementation. Now, our gossip protocol in Algorithm 1 uses an explicit forwarding mechanism in lines 10–13. This explicit forwarding mechanism—as opposed to every correct server re-transmitting every received and valid block in a second communication round—is possible because blocks include references to predecessor blocks. Hence, every server knows what blocks it is missing and whom to ask for them. But in an implementation, we would go a step further and replace the forwarding mechanism—and messages—as described next.

Each block is associated with a unique cryptographic reference that authenticates its content. As a result, both best-effort broadcast operations as well as synchronization operations can be implemented using distributed and scalable key-value stores at each server (e.g. Apache Cassandra, AWS S3), which through sharding have no limits on their throughput. Best-effort broadcasts can be implemented directly, through simple asynchronous IO. This is due to the (now) single type of message, namely blocks, and a single handler for blocks in gossip that performs minimal work: it just records blocks, and then asynchronously ensures their predecessors exist (potentially doing a remote key-value read) and that they are valid (which only involves reference lookups into a hash-table and a single signature verification). Alternatively, best-effort broadcast itself can be implemented using a publish-subscribe notification system and remote reads into distributed key-value stores. In summary, the simplicity and regularity of gossip, and the weak assumptions and light processing, allow systems engineers great freedom to optimize and distribute a server's operations. Both
Hashgraph and
Blockmania (which have seen commercial implementation) report throughputs of many 100,000 transactions per second, and latency in the order of seconds. As we will see in the next section, no matter which P the servers 𝑠 and 𝑠′ choose to interpret, they can build a joint block DAG using the same gossip logic—by only exchanging blocks—and then independently interpret P on G.

To interpret the protocol P embedded in a block DAG G, a server 𝑠 runs the interpret protocol defined in Algorithm 2. Running interpret is decoupled from running gossip in Algorithm 1 and building the block DAG. To interpret a protocol instance of P with label ℓ, 𝑠 runs locally one process instance of P with label ℓ for each 𝑠𝑖 ∈ Srvrs. Now, 𝑠 treats P as a black-box which (i) takes a request or a message, and (ii) returns messages or an indication. These requests and messages are embedded in the block DAG G as (i) requests 𝐵.rs embedded in block 𝐵 ∈ G, or as (ii) edges, by interpreting 𝐵1 ⇀ 𝐵2 as messages sent from 𝐵1.n to 𝐵2.n. A server 𝑠 can fully simulate the protocol instance P for any other server. User requests 𝑟𝑗 to P are embedded into blocks, and 𝑠 reads these requests from the block and passes them on to the simulation of P. Since P is deterministic, 𝑠 can—after the initial request 𝑟𝑗 for P—compute all subsequent messages which would have been sent in P. There is no need for explicitly sending these messages. And indeed, we show that the interpretation of a deterministic protocol P embedded in a block DAG implements a reliable point-to-point link.

To treat P as a black-box, we assume the following high-level interface: (i) an interface to request 𝑟 ∈ Rqsts_P, and (ii) an interface where P indicates 𝑖 ∈ Inds_P. When a request 𝑟 reaches a process instance, we assume that it immediately returns the messages 𝑚1, …, 𝑚𝑘 triggered by 𝑟. This is justified, as 𝑠 runs all process instances locally.
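As a concrete rendering of this assumed black-box shape, the toy protocol below is a sketch in Python: request(r) and receive(m) immediately return the messages they trigger, and every state transition is a deterministic function of the current state and the input. The names Msg and EchoInstance, the echo/deliver behavior, and the threshold parameter are ours for illustration, not the paper's.

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)   # order=True gives a total order <_M
class Msg:
    sender: str
    receiver: str
    payload: str

class EchoInstance:
    """One process instance of a toy deterministic protocol: on a request
    it echoes to all servers; after `threshold` distinct echoes it sets an
    indication. No randomness, no clocks: state depends only on inputs."""
    def __init__(self, me, servers, threshold):
        self.me, self.servers, self.threshold = me, servers, threshold
        self.echoed_by = set()
        self.indication = None            # read out by the interpreter
    def request(self, r):                 # e.g. r = "broadcast(v)"
        # immediately return the messages triggered by the request
        return [Msg(self.me, s, r) for s in self.servers]
    def receive(self, m):
        # immediately return the messages triggered by m (none here)
        self.echoed_by.add(m.sender)
        if self.indication is None and len(self.echoed_by) >= self.threshold:
            self.indication = f"deliver({m.payload})"
        return []

srvrs = ["s1", "s2", "s3"]
p = EchoInstance("s1", srvrs, threshold=2)
out = p.request("broadcast(v)")
assert [m.receiver for m in out] == srvrs
```

Because both entry points return their triggered messages synchronously, an interpreter can drive many such instances locally, feeding them inputs read off the block DAG in a fixed order.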
As requests do not depend on the state of the process instance, these messages also do not depend on the current state of the process instance. Then we assume a low-level interface for P to receive a message 𝑚. Again, we assume that when 𝑚 reaches a process instance, it immediately returns the messages 𝑚1, …, 𝑚𝑘 triggered by 𝑚.

1    module interpret(G ∈ Dags, P ∈ module)
2        I[𝐵 ∈ Blks] ≔ false ∈ B
3        when 𝐵 ∈ G where eligible(𝐵)
4            𝐵.PIs ≔ copy 𝐵.parent.PIs
5            for every (ℓ𝑗 ∈ L, 𝑟𝑗 ∈ Rqsts) ∈ 𝐵.rs
6                𝐵.Ms[out, ℓ𝑗] ≔ 𝐵.PIs[ℓ𝑗].𝑟𝑗
7            for every ℓ𝑗 ∈ {ℓ𝑗 | (ℓ𝑗, 𝑟𝑗) ∈ 𝐵𝑗.rs ∧ 𝐵𝑗 ∈ G ∧ 𝐵𝑗 ⇀⁺ 𝐵}
8                for every 𝐵𝑖 ∈ 𝐵.preds
9                    𝐵.Ms[in, ℓ𝑗] ≔ 𝐵.Ms[in, ℓ𝑗] ∪ {𝑚 | 𝑚 ∈ 𝐵𝑖.Ms[out, ℓ𝑗] and 𝑚.receiver = 𝐵.n}
10               for every 𝑚 ∈ 𝐵.Ms[in, ℓ𝑗] ordered by <_M
11                   𝐵.Ms[out, ℓ𝑗] ≔ 𝐵.Ms[out, ℓ𝑗] ∪ 𝐵.PIs[ℓ𝑗].receive(𝑚)
12           I[𝐵] ≔ true
13       when 𝐵.PIs[ℓ𝑗] indicates 𝑖
14           indicate(ℓ𝑗, 𝑖, 𝐵.n)

Algorithm 2: Interpreting protocol P on the block DAG G.

Algorithm 2 shows the protocol executed by 𝑠 for interpreting a deterministic protocol P on a block DAG G. Therefore, 𝑠 traverses through every 𝐵 ∈ G. Through the state of I in line 2, 𝑠 keeps track of which blocks in G it has already interpreted. Hereby, edges in G impose a partial order: 𝑠 considers a block 𝐵 ∈ G as eligible(𝐵) for interpretation if (i) I[𝐵] = false, and (ii) for every 𝐵𝑖 ∈ 𝐵.preds, I[𝐵𝑖] = true holds. While there may be more than one eligible 𝐵, every 𝐵 ∈ G is interpreted eventually (Lemma A.10). For now, let 𝑠 pick an eligible 𝐵 in line 3 and interpret 𝐵 in lines 4–12. To interpret 𝐵, 𝑠 needs to keep track of two variables for every protocol instance ℓ𝑗: (1) PIs[ℓ𝑗], which holds the state of the process instance ℓ𝑗 for a server 𝑠𝑖 ∈ Srvrs, and (2) Ms[in, ℓ𝑗] and Ms[out, ℓ𝑗], which hold the state of in-going and out-going messages.

Our goal is to track changes to these two variables—the process instances PIs and message buffers Ms—throughout the interpretation of G. To do so, we assign their state to every block 𝐵. That is, after interpreting 𝐵, (1) 𝐵.PIs[ℓ𝑗] should hold the state of the process instance ℓ𝑗 of the server 𝑠𝑖 which built 𝐵, i.e., 𝑠𝑖 = 𝐵.n, and (2) 𝐵.Ms[in, ℓ𝑗] should hold the in-going messages for 𝑠𝑖 and 𝐵.Ms[out, ℓ𝑗] the out-going messages from 𝑠𝑖 for process instance ℓ𝑗. We assume 𝐵.PIs[ℓ𝑗] to be initialized with ⊥, and 𝐵.Ms[𝑑 ∈ {in, out}, ℓ𝑗] with ∅, and they remain so while 𝐵 is eligible (cf. Lemma A.15). As a starting point for computing the state of 𝐵.PIs[ℓ𝑗], 𝑠 copies the state from the parent block of 𝐵 in line 4. For the base case, i.e. all (genesis) blocks 𝐵 without parents, we assume 𝐵.PIs[ℓ𝑗] ≔ new process P(ℓ𝑗, 𝑠𝑖) where 𝑠𝑖 = 𝐵.n. This is effectively a simplification: we assume a running process instance ℓ𝑗 for every 𝑠𝑖 ∈ Srvrs. In an implementation, we would only start process instances for ℓ𝑗 after receiving the first message or request for ℓ𝑗 for 𝑠𝑖 = 𝐵.n. Now, in our simplification, we start all process instances for every label at the genesis blocks and pass them on from the parent blocks.¹ This leads us to our step case: 𝐵 has a parent. As 𝐵.parent ∈ 𝐵.preds, 𝐵.parent has been interpreted, and moreover 𝐵.parent.n = 𝑠𝑖 (Lemma A.13). Next, to advance the copied state on 𝐵, 𝑠 processes (1) all incoming requests 𝑟𝑗 given by 𝐵.rs in lines 5–6, and (2) all incoming messages from 𝐵𝑖.n to 𝐵.n given by 𝐵𝑖 ⇀ 𝐵 in lines 8–11. For the former (1), 𝑠 reads the labels and requests from the field 𝐵.rs. Here 𝑟𝑗 is the literal transcription of the client's original request given to P. To give an example, if P is reliable broadcast, then 𝑟𝑗 could read 'broadcast(𝑚)' (cf. Section 5). When interpreting, 𝑠 requests 𝑟𝑗 from 𝐵.n's simulated protocol instance: 𝐵.PIs[ℓ𝑗].𝑟𝑗. For the latter (2), 𝑠 collects (i) in 𝐵.Ms[in, ℓ] all messages for 𝐵.n from 𝐵𝑖.Ms[out, ℓ] where 𝐵𝑖 ∈ 𝐵.preds in lines 8–9, and then feeds (ii) 𝑚 ∈ 𝐵.Ms[in, ℓ] to 𝐵.PIs[ℓ] in lines 10–11 in order <_M. This (arbitrary) order is a simple way to guarantee that every server interpreting Algorithm 2 will execute exactly the same steps. By feeding those messages and requests to 𝐵.PIs[ℓ𝑗] in lines 6 and 11, 𝑠 computes (1) the next state of 𝐵.PIs[ℓ𝑗] and (2) the out-going messages from 𝐵.n in 𝐵.Ms[out, ℓ𝑗]. By construction, 𝑚.sender = 𝐵.n for 𝑚 ∈ 𝐵.Ms[out, ℓ𝑗] (Lemma A.14). Once 𝑠 has completed this, 𝑠 marks 𝐵 as interpreted in line 12 and can move on to the next eligible block. After 𝑠 interpreted 𝐵, the simulated process instance 𝐵.PIs[ℓ𝑗] may indicate 𝑖 ∈ Inds. If this is the case, 𝑠 indicates 𝑖 for ℓ𝑗 on behalf of 𝐵.n in lines 13–14. Note that none of the steps used the fact that it was 𝑠 who interpreted 𝐵 ∈ G.

¹An equivalent representation would keep process instances PIs[𝐵, ℓ𝑗, 𝐵.n] and message buffers Ms[𝐵, 𝑑 ∈ {in, out}, ℓ𝑗] explicitly as global state. We believe that our notation accentuates the information flow throughout the G.
So, for every 𝐵, every 𝑠′ ∈ Srvrs will come to the exact same conclusion.

But we glossed over one detail: 𝑠 actually had to make a choice—more than one 𝐵 may have been eligible in line 3. This is a feature: by having this choice we can think of interpreting a G′ with G′ ⊒ G as an 'extension' of interpreting G. And, for two eligible 𝐵1 and 𝐵2, it does not matter if we pick 𝐵1 before 𝐵2. Informally, this is because when we pick 𝐵1 in line 3, only the state with respect to 𝐵1 is modified—and this state does not depend on 𝐵2 (Lemma A.11). Another detail we glossed over is line 7: when interpreting 𝐵, 𝑠 interprets the process instances of every ℓ𝑗 relevant on 𝐵 at the same time. But again, because ℓ𝑗 ≠ ℓ′𝑗 are independent instances of the protocol with disjoint messages, i.e. 𝐵𝑖.Ms[out, ℓ𝑗] in line 9 is independent of any 𝐵𝑖.Ms[out, ℓ′𝑗], they do not influence each other and the order in which we process ℓ𝑗 does not matter.

Finally, we give some intuition on how Byzantine servers can influence G and thus the interpretation of P. When running gossip, a Byzantine server ˇ𝑠 can only manipulate the state of G by (1) sending an equivocating block, i.e. building a 𝐵 and 𝐵′ with ˇ𝑠 = 𝐵.parent.n = 𝐵′.parent.n. When interpreting 𝐵 and 𝐵′, 𝑠 will split the state for ˇ𝑠 and have two 'versions' of PIs[ℓ𝑗]—𝐵′.PIs[ℓ𝑗] and 𝐵.PIs[ℓ𝑗]—sending conflicting messages for ℓ𝑗 to servers referencing 𝐵 and 𝐵′. But as P is a BFT protocol, the servers 𝑠𝑖 simulating P (run by 𝑠) can deal with equivocation. Then ˇ𝑠 could (2) reference a block multiple times, or (3) never reference a block. But again, as P is a BFT protocol, the servers 𝑠𝑖 simulating P can deal with duplicate messages and with silent servers.

Going back to Algorithm 2, the main task of 𝑠 interpreting G is to get messages from one block and give them to the next block. So we can see this interpretation of a block DAG as an implementation of a communication channel.
That is, for a correct server 𝑠 executing 𝑠.interpret(G, P), (i) a server 𝑠1 sends messages 𝑚1, . . . , 𝑚𝑘 for a protocol instance ℓ𝑗 in either line 6 or line 11 of Algorithm 2, and (ii) a server 𝑠2 receives a message 𝑚 for a protocol instance ℓ𝑗 in line 11 of Algorithm 2. The next lemma relates the sent and received messages with the message buffers Ms and follows from tracing the variables in Algorithm 2: Lemma 4.1.
For a correct server 𝑠 executing 𝑠.interpret(G, P) (1) a server 𝑠1 sends 𝑚 for a protocol instance ℓ′ iff there is a 𝐵1 ∈ G with 𝐵1.n = 𝑠1 such that 𝑚 ∈ 𝐵1.Ms[out, ℓ′] for a 𝐵′ ∈ G with (ℓ′, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵1. (2) a server 𝑠2 receives a message 𝑚 for protocol instance ℓ′ iff there are some 𝐵1, 𝐵2 ∈ G with 𝐵1 ⇀ 𝐵2 and 𝐵2.n = 𝑠2 and 𝑚 ∈ 𝐵2.Ms[in, ℓ′] for a 𝐵′ ∈ G such that (ℓ′, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵2. The following lemma shows our key observation from before: interpreting a block DAG is independent of the server doing the interpretation. That is, 𝑠 and 𝑠′ will arrive at the same state when interpreting 𝐵 ∈ G. Lemma 4.2. If G ≤ G′ then for every 𝐵 ∈ G, a deterministic protocol P, and correct servers 𝑠 and 𝑠′ executing 𝑠.interpret(G, P) and 𝑠′.interpret(G′, P), it holds that 𝐵.PIs[ℓ𝑗] = 𝐵.PIs′[ℓ𝑗] and 𝐵.Ms[out, ℓ𝑗] = 𝐵.Ms′[out, ℓ𝑗] for (ℓ𝑗, 𝑟) ∈ 𝐵𝑗.rs with 𝐵𝑗 ⇀𝑛 𝐵 for 𝑛 ≥ 0. Proof.
In the following proof, when executing 𝑠′.interpret(G′, P) we write Ms′ and PIs′ to distinguish from Ms and PIs when executing 𝑠.interpret(G, P). We show 𝐵.Ms[out, ℓ𝑗] = 𝐵.Ms′[out, ℓ𝑗] and 𝐵.PIs[ℓ𝑗] = 𝐵.PIs′[ℓ𝑗] by induction on 𝑛—the length of the path from 𝐵𝑗 to 𝐵 in G and G′. For the base case we have 𝐵 = 𝐵𝑗 and ℓ𝑗 ∈ {ℓ𝑗 | (ℓ𝑗, 𝑟𝑗) ∈ 𝐵.rs}. By Lemma A.10, 𝐵 is picked eventually in line 3 of Algorithm 2 when executing 𝑠.interpret(G, P). Then, by line 6, 𝐵.Ms[out, ℓ] is 𝐵.PIs[ℓ𝑗].𝑟𝑗(𝐵.rs). By the same reasoning, 𝐵.Ms′[out, ℓ] = 𝐵.PIs[ℓ𝑗].𝑟𝑗(𝐵.rs) when executing 𝑠′.interpret(G′, P). As 𝐵.PIs[ℓ𝑗].𝑟𝑗(𝐵.rs) is deterministic and depends only on 𝐵, ℓ𝑗, and P, we know that 𝐵.PIs[ℓ𝑗] = 𝐵.PIs′[ℓ𝑗] and 𝐵.Ms[out, ℓ𝑗] = 𝐵.Ms′[out, ℓ𝑗], and conclude the base case. For the step case, by induction hypothesis, for 𝐵𝑖 ∈ 𝐵.preds with 𝐵𝑗 ⇀𝑛−1 𝐵𝑖 holds (i) 𝐵𝑖.Ms[out, ℓ𝑗] = 𝐵𝑖.Ms′[out, ℓ𝑗], and (ii) 𝐵𝑖.PIs[ℓ𝑗] = 𝐵𝑖.PIs′[ℓ𝑗]. Again by Lemma A.10, 𝐵 is picked eventually in line 3 of Algorithm 2 when executing 𝑠.interpret(G, P) and 𝑠′.interpret(G′, P). By line 4, and as 𝐵.parent ∈ 𝐵.preds and (ii), now 𝐵.PIs[ℓ𝑗] = 𝐵.PIs′[ℓ𝑗]. Now, as P is deterministic, we only need to establish that 𝐵.Ms[in, ℓ𝑗] = 𝐵.Ms′[in, ℓ𝑗] to conclude that 𝐵.PIs[ℓ𝑗] = 𝐵.PIs′[ℓ𝑗] and 𝐵.Ms[out, ℓ𝑗] = 𝐵.Ms′[out, ℓ𝑗], which, as (ℓ𝑗, 𝑟) ∉ 𝐵.rs, is only modified in line 11. By Lemma A.9, we know for both executions that 𝐵.Ms[in, ℓ𝑗] = 𝐵.Ms′[in, ℓ𝑗] = ∅ before 𝐵 is picked. Now, by (i) and line 9, 𝐵.Ms[in, ℓ𝑗] = 𝐵.Ms′[in, ℓ𝑗], and we conclude the proof. □ A straightforward consequence of Lemma 4.2 is that when, in the interpretation of 𝑠, a server 𝑠1 sends a message 𝑚 for ℓ𝑗, then 𝑠1 sends 𝑚 in the interpretation of 𝑠′ (cf. Lemma A.16).
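The order-independence underlying Lemma 4.2 can be demonstrated on a toy interpretation (illustrative names; the 'protocol state' here is just the set of creators a block has heard from):

```python
# Lemma 4.2 in miniature: a block's interpreted state depends only on the
# block and its predecessors, so any topological order of eligible blocks
# yields the same result.
def interpret(order, dag):
    state = {}                      # block id -> set of creators heard from
    for bid in order:
        creator, preds = dag[bid]
        heard = {creator}
        for p in preds:
            heard |= state[p]       # predecessor states, already computed
        state[bid] = frozenset(heard)
    return state

# b1 and b2 are both eligible first; b3 references both
dag = {"b1": ("s1", []), "b2": ("s2", []), "b3": ("s3", ["b1", "b2"])}
same = interpret(["b1", "b2", "b3"], dag) == interpret(["b2", "b1", "b3"], dag)
```

Both interpretation orders produce identical per-block states, mirroring how 𝑠 and 𝑠′ agree on 𝐵.PIs and 𝐵.Ms.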
Curiously, 𝑠1 does not have to be correct: we know 𝑠1 sent a block 𝐵 in G that corresponds to a message 𝑚 in the interpretation of 𝑠. Now this block will be interpreted by 𝑠′, and the same message will be materialized—and for that the server 𝑠1 does not need to be correct. By Lemma 4.3, interpret(G, P) has the properties of an authenticated perfect point-to-point link after [3, Module 2.5, p. 42]. Lemma 4.3.
For a block DAG G and a correct server 𝑠 executing 𝑠.interpret(G, P) (1) if a correct server 𝑠1 sends a message 𝑚 for a protocol instance ℓ to a correct server 𝑠2, then 𝑠2 eventually receives 𝑚 for protocol instance ℓ for a correct server 𝑠′ executing 𝑠′.interpret(G′, P) and a block DAG G′ > G (reliable delivery). (2) for a protocol instance ℓ no message is received by a correct server 𝑠 more than once (no duplication). (3) if some correct server 𝑠2 receives a message 𝑚 for protocol instance ℓ with sender 𝑠1 and 𝑠1 is correct, then the message 𝑚 for protocol instance ℓ was previously sent to 𝑠2 by 𝑠1 (authenticity). Proof Sketch.
For (1), we observe that every message 𝑚 sent in 𝑠.interpret(G, P) will be sent in 𝑠′.interpret(G′, P) for G′ > G (by Lemma A.16). Now, by Lemma 3.7, 𝑠′ will eventually have some G′ > G. By Lemma 4.1 (1) we have witnesses 𝐵1, 𝐵2 ∈ G′ with 𝐵1 ⇀ 𝐵2, and by Lemma 4.1 (2) we found a witness 𝐵2 to receive the message on when executing 𝑠′.interpret(G′, P). For (2), we observe that duplicate messages are only possible if 𝑠 inserted the block 𝐵, which gives rise to the message 𝑚, in two different blocks built by 𝑠. But this contradicts the correctness of 𝑠 (by Lemma A.6). For (3), we observe that only 𝑠1 can build and sign any block 𝐵 with 𝑠1 = 𝐵.n, which gives rise to 𝑚. □

module shim(𝑠 ∈ Srvrs, P ∈ module)
  rqsts ≔ ∅ ∈ L × Rqsts
  G ≔ ∅ ∈ Dags
  gssp ≔ new process gossip(𝑠, G, rqsts)
  intprt ≔ new process interpret(G, P)
  when request(ℓ ∈ L, 𝑟 ∈ Rqsts)
    rqsts.put(ℓ, 𝑟)
  when intprt.indicate(ℓ, 𝑖, 𝑠′) where 𝑠′ = 𝑠
    indicate(ℓ, 𝑖)
  repeatedly
    gssp.disseminate()

Algorithm 3:
Interfacing between gossip, interpret, and the user of P. Before we compose gossip and interpret in the next section under a shim, we highlight the key benefits of using interpret in Algorithm 2. By leveraging the block DAG structure together with P's determinism, we can compress messages to the point of omitting some of them. When looking at line 11 of Algorithm 2, the messages in the buffers Ms[out, ℓ] and Ms[in, ℓ] have never been sent over the network. They are locally computed, functional results of the calls receive(𝑚). The only 'messages' actually sent over the network are the requests 𝑟𝑖 read from 𝐵.rs in line 6. To determine the messages following from these requests, the server 𝑠 simulates an instance of protocol P for every 𝑠𝑖 ∈ Srvrs—simply by simulating the steps in the deterministic protocol. However, not every step can be simulated: as 𝑠 does not know 𝑠𝑖's private key, 𝑠 cannot sign a message on 𝑠𝑖's behalf. But then, this is not necessary, because 𝑠 can derive the authenticity of the message triggered by a block 𝐵 from the signature of 𝐵, i.e., 𝐵.𝜎. So instead of signing individual messages, 𝑠𝑖 can give a batch signature 𝐵.𝜎 authenticating every message materialized through 𝐵. Finally, 𝑠 interprets protocol instances with labels ℓ𝑗 in parallel in line 7 of Algorithm 2. While traversing the block DAG, 𝑠 uses the structure of the block DAG to interpret requests and messages for every ℓ𝑗. Now, the same block giving rise to a request in process instance ℓ𝑗 may materialize a message in process instance ℓ′𝑗. The (small) price to pay is the increase of block size by references to predecessor blocks, i.e., 𝐵.preds. We will illustrate the benefits again on the concrete example of byzantine reliable broadcast in Section 5. The protocol shim(P) in Algorithm 3 is responsible for the choreography of the gossip protocol in Algorithm 1, the interpret protocol in Algorithm 2, and the external user of P.
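The batch-signature observation can be sketched as follows (a sketch with illustrative names; HMAC over a canonical serialization stands in for a real digital signature scheme, and the derivation rule is a toy):

```python
import hashlib, hmac, json

KEY_OF_S1 = b"s1-secret"            # stand-in for s1's signing key (assumed)

def sign_block(key, block):
    # one signature over the whole block ...
    payload = json.dumps(block, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def derive_messages(block):
    # ... covers every message derived deterministically from it
    return [(block["n"], "ECHO", r) for (_label, r) in block["rs"]]

block = {"n": "s1", "k": 0, "preds": [], "rs": [["l1", "broadcast(v)"]]}
sigma = sign_block(KEY_OF_S1, block)

# a verifier checks the single block signature, then re-derives the messages
ok = hmac.compare_digest(sigma, sign_block(KEY_OF_S1, block))
msgs = derive_messages(block)
```

The derived messages inherit their authenticity from 𝐵.𝜎: no per-message signatures are created or verified.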
Therefore, the server 𝑠 executing shim(P) in Algorithm 3 keeps track of two synchronized data structures: (1) a buffer of labels and requests rqsts in line 2, and (2) the block DAG G in line 3. By calling rqsts.put(ℓ, 𝑟), 𝑠 inserts (ℓ, 𝑟) in rqsts, and by calling rqsts.get(), 𝑠 gets and removes a suitable number of requests (ℓ1, 𝑟1), . . . , (ℓ𝑛, 𝑟𝑛) from rqsts. To insert a block 𝐵 in G, 𝑠 calls G.insert(𝐵) from Definition 3.4. We tacitly assume these operations are atomic. When starting an instance of gossip and interpret in lines 4 and 5, 𝑠 passes in references to these shared data structures. When the external user of protocol P requests 𝑟 ∈ Rqsts for ℓ ∈ L from 𝑠 via request(ℓ, 𝑟) to shim(P), then 𝑠 inserts (ℓ, 𝑟) in rqsts in lines 6–7. By executing gossip, 𝑠 writes (ℓ, 𝑟) in B in Algorithm 1 line 15, and as eventually B ∈ G, 𝑟 will be requested from protocol instance PIs[ℓ] when 𝑠 executes line 6 in Algorithm 2 (cf. Lemma A.17). On the other hand, when interpret indicates 𝑖 ∈ Inds for the interpretation of P for 𝑠 itself, i.e., 𝑠 = 𝑠′, then 𝑠 indicates to the user of P in lines 8–9 of Algorithm 3 (cf. Lemma A.18). For 𝑠 to only indicate when 𝑠 = 𝑠′ might be an over-approximation: 𝑠 trusts 𝑠's interpretation of P, as 𝑠 is correct for 𝑠. We believe this restriction can be lifted (cf. Section 7). Finally, as promised in Section 3, in lines 10–11 𝑠 repeatedly requests disseminate from gossip to disseminate B. Within the control of 𝑠, the time between calls to disseminate can be adapted to meet the network assumptions of P, and can be enforced e.g. by an internal timer, the block's payload, or when 𝑠 falls 𝑛 blocks behind. For our proofs we only need to guarantee that a correct 𝑠 will eventually request disseminate. Now, taking together what we have established for gossip in Section 3, i.e.
that correct servers will eventually share a joint block DAG, and that interpret gives a point-to-point link between them in Section 4, for shim(P) the following holds: Theorem 5.1.
For a correct server 𝑠 and a deterministic protocol P, if P is an implementation of (i) an interface I with requests Rqsts P and indications Inds P using the reliable point-to-point link abstraction such that (ii) a property P holds, then shim(P) in Algorithm 3 implements (i) I such that (ii) P holds. Proof.
By Lemma A.17 and Lemma A.18, (i) shim(P) implements the interface I of Rqsts P and Inds P. For (ii), by assumption P holds for P using a reliable point-to-point link abstraction. By Lemma 4.3, 𝑠.interpret(G, P) implements a reliable point-to-point link. As Algorithm 2 treats P as a black-box, every 𝐵.PIs[ℓ] holds an execution of P. Assume this execution violates P. But then an execution of P violates P, which contradicts the assumption that P holds for P. □ Our proof relies on a point-to-point link between two correct servers, and thus we can translate the argument of all safety and liveness properties whose reasoning relies on the point-to-point link abstraction to our block DAG framework. However, because we provide an abstraction, we cannot guarantee implementation-level properties, e.g. for performance. They rely on the concrete implementation. Also, as discussed in Section 4, properties related to signatures may not easily translate, because blocks, and not messages, are (batch-)signed. P ≔ byzantine reliable broadcast. In the remainder of this section, we will sketch how a user may use the block DAG framework. Our example for P is byzantine reliable broadcast—a protocol underlying recently-proposed efficient payment systems [2, 12]. Algorithm 4 in the appendix shows an implementation of byzantine reliable broadcast: this is the P which the user passes to shim(P), i.e. in the block DAG framework P is fixed to Algorithm 4. The request in Algorithm 4 is broadcast(𝑣) for a value 𝑣 ∈ Vals, so
Rqsts P = {broadcast(𝑣) | 𝑣 ∈ Vals}. For simplicity and generality, we assume that P—not shim(P)—authenticates requests, i.e. requests are self-contained and can be authenticated while simulating P (e.g. Algorithm 4 line 3). However, in an implementation shim(P) may be employed to authenticate requests. On the other hand, Algorithm 4 indicates with deliver(𝑣), so Inds P = {deliver(𝑣) | 𝑣 ∈ Vals}. The messages sent in Algorithm 4 are M P = {ECHO 𝑣, READY 𝑣 | 𝑣 ∈ Vals}, where sender and receiver are the 𝑠 ∈ Srvrs running shim(P). When executing line 9 of interpret(G, P) in Algorithm 2, then receive(ECHO) is triggered, and received ECHO holds in Algorithm 4 (e.g. in line 6). As we assume P returns messages immediately, e.g. when the simulation reaches send ECHO, then ECHO is returned immediately (e.g. in line 8 of Algorithm 4). Figure 4 shows a block DAG for an execution of shim(P) using byzantine reliable broadcast. It further explicitly shows the in- and out-going messages from Ms[in, ℓ] and Ms[out, ℓ] for a protocol instance ℓ and the request broadcast( ) at block 𝐵1. None of these messages are ever actually sent over the network—every server interpreting this block DAG can use interpret in Algorithm 2 to replay Algorithm 4 and get the same picture. Figure 4 shows only the (unsent) messages for ℓ and broadcast( ) in 𝐵1.rs, but 𝐵1.rs may hold more requests, such as broadcast( ) for ℓ2, and all the messages of all these requests could be materialized in the same manner—without any messages, or even additional blocks, sent. And not only 𝐵1 holds such requests—also 𝐵2 does. For example, 𝐵2.rs may contain broadcast( ) for ℓ2.
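The ECHO/READY logic that interpret replays can be sketched as a deterministic state machine (a sketch assuming the standard Bracha-style thresholds for 𝑛 = 3𝑓 + 1; the paper's Algorithm 4 may differ in details, and all names are illustrative):

```python
class ReliableBroadcast:
    def __init__(self, n=4, f=1):
        self.n, self.f = n, f
        self.readied = self.delivered = False
        self.echoes, self.readies = set(), set()
        self.indications = []

    def broadcast(self, v):
        # the request from B.rs: echo the value to all servers
        return [("ECHO", v)]

    def receive(self, sender, kind, v):
        out = []
        if kind == "ECHO":
            self.echoes.add(sender)
        elif kind == "READY":
            self.readies.add(sender)
        # amplify to READY on 2f+1 ECHOs or f+1 READYs
        if not self.readied and (len(self.echoes) >= 2 * self.f + 1
                                 or len(self.readies) >= self.f + 1):
            self.readied = True
            out.append(("READY", v))
        # deliver once 2f+1 READYs are seen
        if not self.delivered and len(self.readies) >= 2 * self.f + 1:
            self.delivered = True
            self.indications.append(("deliver", v))
        return out

rb = ReliableBroadcast()
msgs = rb.broadcast("v")                 # materialized as Ms[out, l]
for s in ["s1", "s2", "s3"]:             # 2f+1 ECHOs arrive via the DAG
    rb.receive(s, "ECHO", "v")
for s in ["s1", "s2", "s3"]:             # 2f+1 READYs trigger delivery
    rb.receive(s, "READY", "v")
```

Because receive() is a pure state transition returning the messages to send, the simulation in Algorithm 2 can collect its results into the Ms[out, ℓ] buffers without any network traffic.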
Then, for ℓ2 on 𝐵2 this materializes out = ECHO to 𝑠1, 𝑠2, 𝑠3, and again, without sending any messages, for ℓ2 on the successor blocks materializes in = ECHO from 𝑠2. This is, of course, the same for every 𝐵𝑖.

[Fig. 4. The message buffers for process instance ℓ1 of a block DAG with (ℓ1, broadcast( )) ∈ 𝐵1.rs.]

To recap what makes interpreting P on a block DAG so attractive: sending blocks instead of messages in a deterministic P results in a compression of messages—up to their omission. And not only do these messages not have to be sent, they also do not have to be signed. It suffices that every server signs their blocks. Finally, a single block sent is interpreted as messages for a very large number of parallel protocol instances. The last years have seen many proposals based on block DAG paradigms (see [21] for an SoK)—some with commercial implementations. We focus on the proposals closest to our work:
Hashgraph [1],
Blockmania [7],
Aleph [11], and
Flare [20]. Underlying all of these systems is the same idea: first, build a common block DAG, and then locally interpret the blocks and graph structure as communication for some protocol:
Hashgraph encodes a consensus protocol in the block DAG structure,
Blockmania [7] encodes a simplified version of PBFT [4],
Aleph [11] employs atomic broadcast and consensus, and
Flare [20] builds on federated byzantine agreement from
Stellar [18] combined with block DAGs to implement a federated voting protocol. Naturally, the correctness arguments of these systems focus on their system, e.g. the correctness proof in
Coq of byzantine consensus in
Hashgraph [6]. In our work, we aim for a different level of generality: we establish the structure underlying protocols which employ block DAGs, i.e. we show that a block DAG implements a reliable point-to-point channel (Section 4). To that end, and as opposed to previous approaches, we treat the protocol P completely as a black-box, i.e. our framework is parametric in the protocol P. The idea to leverage deterministic state machines to replay the behavior of other servers goes back to PeerReview [13], where servers exchange logs of received messages for auditing to eventually detect and expose faulty behavior. This idea was taken up by block DAG approaches—but with the twist to leverage determinism to not send those messages that can be determined. This allows compressing messages to the extent of only indicating that a message has been sent, as we do in Section 4. However, we believe nothing precludes our proposed framework from being adapted to hold equivocating servers accountable, drawing e.g. on recent work from
Polygraph to detect byzantine behavior [5]. While our framework treats the interpreted protocol P as a black-box, the recently proposed threshold logical clock abstraction [10] allows the higher-level protocol to operate on an asynchronous network as if it were a synchronous network by abstracting the communication of groups. Similar to our framework, threshold clocks also rely on causal relations between messages by including a threshold number of messages for the next time step. This would roughly correspond to including a threshold number of predecessor blocks. In contrast, our framework, by only providing the abstraction of a reliable point-to-point link to P, pushes reasoning about messages to P. We have presented a generic formalization of a block DAG and its properties, and in particular results relating to the eventual delivery of all blocks from correct servers to other correct servers. We then leverage this property to provide a concrete implementation of a reliable point-to-point channel, which can be used to implement any deterministic protocol P efficiently. In particular, we have efficient message compression, as those messages emitted by P which are the results of the deterministic execution of P may be omitted. Moreover, we allow for batching the execution of multiple parallel instances of P using the same block DAG, and the de-coupling of maintaining the joint block DAG from its interpretation as instances of P. Extensions.
First, throughout our work we assume P is deterministic. The protocol may accept user requests, and emit deterministic messages based on these events and other messages. However, it may not use any randomness in its logic. It seems we can extend the proposed composition to non-deterministic protocols P—but some care needs to be applied around the security properties assumed from randomness. In case randomness is merely at the discretion of a server running their instance of the protocol, we can apply techniques to de-randomize the protocol by relying on the server including in their created block any coin flips used. In case randomness has to be unbiased, as is the case for asynchronous Byzantine consensus protocols, a joint shared randomness protocol needs to be embedded and used to de-randomize the protocol. Luckily, shared coin protocols that are secure under BFT assumptions and in the asynchronous network setting exist [15], and our composition could be used to embed them into the block DAG. However, we leave the details of a generic embedding for non-deterministic protocols for future work. Second, we have discussed the case of embedding asynchronous protocols into a block DAG. We could extend this result to BFT protocols in the partially synchronous network setting [8] by showing that the block DAG interpretation not only creates a reliable point-to-point channel but also that its delivery delay is bounded if the underlying network is partially synchronous. We have a proof sketch to this effect, but a complete proof would require introducing machinery to reason about timing and, we believe, would not enhance the presentation of the core arguments behind our abstraction. Third, our correctness conditions on the block DAG seem to be much more strict than necessary. For example, block validity requires a server to have processed all previous blocks. In practice this results in blocks that must include at some position 𝑘 all predecessors of blocks to be included after position 𝑘.
This leads to inefficiencies: a server must include references to all blocks by other parties into their own blocks, which represents an 𝑂(𝑛) overhead (admittedly with a small constant, since a cryptographic hash is sufficient). Instead, block inclusion could be more implicit: when a server 𝑠 includes a block 𝐵′ in its block 𝐵, all predecessors of 𝐵′ could be implicitly included in the block 𝐵, transitively or up to a certain depth. This would reduce the communication overhead even further. Since it is possible to take a block DAG with this weaker validity condition and unambiguously extract a block DAG with the stronger validity condition we assume, we foresee no issues for all our theorems to hold. Furthermore, when interpreting a protocol, currently a server only indicates when the server running the interpretation indicates in the interpretation. This is to assure that the server running the interpretation can trust the server in the interpretation, i.e. itself. Again, we believe that this can be weakened by leveraging properties of the interpreted protocol. However, we again leave a full exploration of this space to future work. Limitations.
Some limitations of our composition require much more foundational work to be overcome. And these limitations also apply to the block DAG based protocols which we attempt to formalize. First, there are practical challenges when embedding protocols tolerating processes that can crash and recover. At first glance, safe protocols in the crash-recovery setting seem like a great match for the block DAG approach: they do allow parties that recover to re-synchronize the block DAG and continue execution, assuming that they persist enough information (usually in a local log) as part of P. However, there are challenges: first, our block DAG assumes that blocks issued have consecutive numbers. If the higher-level protocols use these block sequence numbers as labels for state machines (as in Blockmania), a recovering process may have to 'fill in' a large number of blocks before catching up with others. An alternative is for block sequence numbers to not have to be consecutive, but merely increasing, which would remove this issue. However, in all cases, unless there is a mechanism for the higher-level protocol P to signal that some information will never again be needed, the full block DAG has to be stored by all correct parties forever. This seems to be a limitation of both our abstraction of a block DAG and the traditional abstraction of reliable point-to-point channels and the protocols using them, which seem to not require protocols to ever signal that a message is not needed any more (to stop re-transmission attempts to crashed or Byzantine servers).
Fixing this issue, and proving that protocols can be embedded into a block DAG that can be operated and interpreted using a bounded amount of memory to avoid exhaustion attacks, is a challenging and worthy avenue for future work, and is likely to require a re-thinking of how we specify BFT protocols in general to ensure this property, beyond their embedding into a block DAG. Finally, one of the advantages of using a block DAG is the ability to separate the operation and maintenance of the block DAG from the later or off-line interpretation of instances of protocol P. However, this separation does not hold and extend to operations that change the membership of the server set that maintains the block DAG—often referred to as reconfiguration. How to best support reconfiguration of servers in block DAG protocols seems to be an open issue, besides splitting protocol instances into pre-defined epochs. REFERENCES [1] Leemon Baird. 2016.
The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance. Technical Report. 28 pages.
[2] Mathieu Baudet, George Danezis, and Alberto Sonnino. 2020. FastPay: High-Performance Byzantine Fault Tolerant Settlement. (April 2020). arXiv:2003.11506
[3] Christian Cachin, Rachid Guerraoui, and Luís Rodrigues. 2011. Introduction to Reliable and Secure Distributed Programming (second ed.). Springer-Verlag, Berlin Heidelberg.
[4] Miguel Castro and Barbara Liskov. 1999. Practical Byzantine Fault Tolerance. In Proceedings of the Third Symposium on Operating Systems Design and Implementation (OSDI ’99). USENIX Association, Berkeley, CA, USA, 173–186.
[5] Pierre Civit, Seth Gilbert, and Vincent Gramoli. 2020. Brief Announcement: Polygraph: Accountable Byzantine Agreement. (2020), 3.
[6] Karl Crary. 2018. Verifying the Hashgraph Consensus Algorithm. (2018), 13.
[7] George Danezis and David Hrycyszyn. 2018. Blockmania: From Block DAGs to Consensus. Technical Report. arXiv:1809.01620
[8] Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. 1988. Consensus in the Presence of Partial Synchrony. J. ACM 35 (1988), 288–323.
[9] Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. 1985. Impossibility of Distributed Consensus with One Faulty Process. Journal of the ACM (JACM) 32, 2 (1985), 374–382.
[10] Bryan Ford. 2019. Threshold Logical Clocks for Asynchronous Distributed Coordination and Consensus. (July 2019). arXiv:1907.07010
[11] Adam Gągol, Damian Leśniak, Damian Straszak, and Michał Świętek. 2019. Aleph: Efficient Atomic Broadcast in Asynchronous Networks with Byzantine Nodes. In Proceedings of the 1st ACM Conference on Advances in Financial Technologies (AFT ’19). Association for Computing Machinery, New York, NY, USA, 214–228. https://doi.org/10.1145/3318041.3355467
[12] Rachid Guerraoui, Petr Kuznetsov, Matteo Monti, Matej Pavlovic, and Dragos-Adrian Seredinschi. 2019. The Consensus Number of a Cryptocurrency (Extended Version). In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC ’19) (2019), 307–316. https://doi.org/10.1145/3293611.3331589 arXiv:1906.05574
[13] Andreas Haeberlen, Petr Kouznetsov, and Peter Druschel. 2007. PeerReview: Practical Accountability for Distributed Systems. In Proceedings of Twenty-First ACM SIGOPS Symposium on Operating Systems Principles (SOSP ’07). ACM, New York, NY, USA, 175–188. https://doi.org/10.1145/1294261.1294279
[14] Jonathan Katz and Yehuda Lindell. 2007. Introduction to Modern Cryptography (Chapman & Hall/CRC Cryptography and Network Security Series). Chapman & Hall/CRC.
[15] Eleftherios Kokoris Kogias, Dahlia Malkhi, and Alexander Spiegelman. 2020. Asynchronous Distributed Key Generation for Computationally-Secure Randomness, Consensus, and Threshold Signatures. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security (CCS ’20). Association for Computing Machinery, New York, NY, USA, 1751–1767. https://doi.org/10.1145/3372297.3423364
[16] Leslie Lamport. 1978. Time, Clocks, and the Ordering of Events in a Distributed System. Commun. ACM 21, 7 (July 1978), 558–565. https://doi.org/10.1145/359545.359563
[17] Petros Maniatis and Mary Baker. 2002. Secure History Preservation Through Timeline Entanglement. In Proceedings of the 11th USENIX Security Symposium. USENIX Association, Berkeley, CA, USA, 297–312.
[18] David Mazières. 2015. The Stellar Consensus Protocol: A Federated Model for Internet-Level Consensus. Technical Report.
[19] Alfred J. Menezes, Scott A. Vanstone, and Paul C. Van Oorschot. 1996. Handbook of Applied Cryptography (1st ed.). CRC Press, Inc., Boca Raton, FL, USA.
[20] Sean Rowan and Naïri Usher. 2019. The Flare Consensus Protocol: Fair, Fast Federated Byzantine Agreement Consensus. (2019), 10.
[21] Qin Wang, Jiangshan Yu, Shiping Chen, and Yang Xiang. 2020. SoK: Diving into DAG-Based Blockchain Systems. (Dec. 2020). arXiv:2012.06128
A APPENDIX

A.1 Ad Section 2: Background
Definition A.1.
Let hash : 𝐴 → 𝐴′ be a secure cryptographic hash function. We write hash(𝑥) for the hash of 𝑥 ∈ 𝐴, and we write hash(𝐴) for 𝐴′. By definition [19, p. 332], for any hash it is computationally infeasible (1) to find any preimage 𝑚 such that hash(𝑚) = 𝑥 when given any 𝑥 for which a corresponding input is not known (preimage-resistance), (2) given 𝑚, to find a 2nd-preimage 𝑚′ ≠ 𝑚 such that hash(𝑚) = hash(𝑚′) (2nd-preimage resistance), and (3) to find any two distinct inputs 𝑚, 𝑚′ such that hash(𝑚) = hash(𝑚′) (collision resistance). Proof of Lemma 2.2 (1).
By definition of G and insert. □ Proof of Lemma 2.2 (2).
Let G′ = insert(G, 𝑣, 𝐸). By definition of insert, VG ⊆ VG′. Assume 𝑣 ∉ G. As 𝐸 contains only edges of the form (𝑣𝑖, 𝑣) where 𝑣 ∉ G, EG = EG′ ∩ (VG × VG) holds. □ Proof of Lemma 2.2 (3).
By definition of 𝐸, insert(G, 𝑣, 𝐸) only adds edges from vertices in G to 𝑣. As 𝑣 ∉ G, there is no edge (𝑣, 𝑣𝑗) in G. By acyclicity of G, insert(G, 𝑣, 𝐸) is acyclic. □ Proof of Lemma 1.
By assumption, 𝑠 considers 𝐵 valid, hence by lines 6–8 adds a reference to 𝐵 to B. As 𝑠 is correct, 𝑠 eventually will disseminate(), and then 𝑠 disseminates B in line 17. We refer to this disseminated B as 𝐵′. By Assumption 1, every correct server will eventually receive 𝐵′. Assume a correct server 𝑠′ which has received 𝐵′ but has not received 𝐵. As 𝑠′ has not received 𝐵, by Definition 3.3 (iii), 𝑠′ does not consider 𝐵′ valid. After time Δ𝐵′, by lines 10–11, 𝑠′ will request 𝐵 from 𝑠 by sending FWD 𝐵. Again by Assumption 1, after 𝑠 receives FWD 𝐵 from 𝑠′, by lines 12–13, 𝑠 will send 𝐵 to 𝑠′, which will eventually arrive, and 𝑠′ receives 𝐵. □

A.2 Ad Section 3: Building a Block DAG
In this section we give the proofs—and lemmas those proofs rely on—which we omitted in Section 3. All proofs refer to Algorithm 1. For the execution we assume that the body of each handler is executed atomically and sequentially within the handler.
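As a concrete illustration of the backward references used in these proofs, here is a sketch assuming ref(𝐵) is a cryptographic hash (SHA-256 over a canonical serialization as a stand-in): referencing a block requires already knowing its full content, which is the intuition behind Lemma 3.2's claim that two blocks cannot reference each other.

```python
import hashlib, json

def ref(block):
    # backward reference: hash of the serialized block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"n": "s1", "k": 0, "preds": []}
b1 = {"n": "s2", "k": 0, "preds": [ref(genesis)]}
b2 = {"n": "s1", "k": 1, "preds": [ref(genesis), ref(b1)]}

# b1's reference is fixed by its content; b2 could only be built after b1,
# so b1 cannot also contain ref(b2) without breaking preimage resistance
mutual = ref(b2) in b1["preds"]
```

Changing any field of a block changes its reference, so the DAG structure is bound to the block contents.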
Proof of Lemma 3.2.
Let 𝑥1 = ref(𝐵1) and 𝑥2 = ref(𝐵2). By assumption, 𝑥1 ∈ 𝐵2.preds. Assume towards a contradiction that 𝑥2 ∈ 𝐵1.preds. Then, to compute 𝑥1 we need to know 𝑥2 = ref(𝐵2). But this contradicts preimage-resistance of ref. □ Lemma A.2.
For a block DAG G and a block 𝐵 ∈ G, G = G.insert(𝐵) holds, i.e. insert is idempotent. Proof. 𝐸 is fixed to {(𝐵, 𝐵′) | 𝐵 ∈ 𝐵′.preds} by definition of insert on block DAGs. Since 𝐵 ∈ G, also {(𝐵, 𝐵′) | 𝐵 ∈ 𝐵′.preds} ⊆ EG by definition of block DAG. Thus, G.insert(𝐵) = G by Lemma 2.2 (1). □ Lemma A.3.
Let G be a block DAG for a server 𝑠 and let 𝐵 ′ be a block such that valid ( 𝑠, 𝐵 ′ ) holds and for all 𝐵 ∈ 𝐵 ′ . preds holds 𝐵 ∈ G . Let G ′ = G . insert ( 𝐵 ′ ) . Then G ′ is a block DAG for 𝑠 . Proof.
To show G′ is a block DAG we need to show that G′ adheres to Definition 3.4. For condition (i) we have to show that 𝑠 considers all blocks in G′ valid. The blocks in G′ are—by definition of insert—VG′ = VG ∪ {𝐵′}. As G is a block DAG for 𝑠, valid(𝑠, 𝐵) holds for all 𝐵 ∈ VG, and valid(𝑠, 𝐵′) follows from the assumption of the lemma. For condition (ii) we have to show that for every backwards reference to 𝐵 from the block 𝐵′, the block DAG G′ contains 𝐵 and an edge from 𝐵 to 𝐵′. The former—for all 𝐵 ∈ 𝐵′.preds we have 𝐵 ∈ G—holds by assumption of the lemma. The latter—(𝐵, 𝐵′) ∈ EG′ for 𝐵 ∈ 𝐵′.preds—holds by definition of insert. As G is a block DAG, condition (ii) holds for every block in G. It remains to show that G′ is acyclic. If 𝐵′ ∈ G then by Lemma A.2, G′ = G, and G is acyclic. If 𝐵′ ∉ G then G′ is acyclic by Lemma 2.2 (3). □ Lemma A.4.
For every correct server 𝑠 executing gossip of Algorithm 1, whenever the execution reaches line 16 then valid ( 𝑠, B) holds. Proof.
We need to show that once the execution reaches line 16, Definition 3.3 (i)–(iii) hold. As 𝑠 is correct and signs B in line 15, (i) verify𝜎(𝑠, B.𝜎) holds. We prove (ii) and (iii) by induction on the number of times 𝑛 the execution reaches line 16. For the base case, B is (a) a genesis block with B.k = 0 as initialized in line 2. Moreover, B has no parent. As 𝑠 is correct and only inserts 𝐵′ in B.preds in line 8 whenever 𝑠 considers 𝐵′ valid in line 6, 𝑠 considers all 𝐵′ ∈ B.preds valid. In the step case, B𝑛+1 is updated in line 18. We show that (b) B𝑛+1 has exactly one parent B𝑛. By line 18, B𝑛+1.n = B𝑛.n and B𝑛+1.k = B𝑛.k + 1. As B𝑛 is inserted in B𝑛+1.preds in line 18, by definition B𝑛+1.parent = B𝑛. By induction hypothesis, 𝑠 considers B𝑛 valid, and again, as 𝑠 is correct and only inserts 𝐵′ in B.preds in line 8 whenever 𝑠 considers 𝐵′ valid in line 6, (iii) 𝑠 considers all 𝐵′ ∈ B.preds valid. □ Lemma A.5.
For every correct server 𝑠 executing gossip of Algorithm 1, G is a block DAG.

Proof.
We prove the lemma by induction on the number of times 𝑛 the execution reaches line 7 or line 16 of Algorithm 1. As G is initialized to the empty block DAG in line 3 of Algorithm 3, G is a block DAG for the base case 𝑛 = 0. In the step case, by the induction hypothesis, G is a block DAG. By Lemma A.3, G.insert(𝐵′) is a block DAG if (i) valid(𝑠, 𝐵′) holds, and (ii) 𝐵 ∈ G holds for all 𝐵 ∈ 𝐵′.preds. The former, (i) valid(𝑠, 𝐵′), holds either by line 6 or by Lemma A.4. As 𝑠 inserts any block 𝐵 which 𝑠 has received and considers valid by lines 6–8, for the latter (ii) it suffices to show that 𝑠 considers all 𝐵 ∈ 𝐵′.preds valid. As 𝑠 considers 𝐵′ valid, by Definition 3.3 (ii), 𝑠 considers all 𝐵 ∈ 𝐵′.preds valid. □

Proof of Lemma 3.6 (1).
By assumption 𝑠 considers 𝐵 valid, hence by lines 6–8 𝑠 adds a reference to 𝐵 to B. As 𝑠 is correct, 𝑠 eventually triggers disseminate(), and then 𝑠 disseminates B in line 17. We refer to this disseminated B as 𝐵′. By Assumption 1, every correct server will eventually receive 𝐵′. Assume a correct server 𝑠′ which has received 𝐵′ but has not received 𝐵. As 𝑠′ has not received 𝐵, by Definition 3.3 (iii), 𝑠′ does not consider 𝐵′ valid. After time Δ𝐵′, by lines 10–11, 𝑠′ will request 𝐵 from 𝑠 by sending FWD 𝐵. Again by Assumption 1, after 𝑠 receives FWD 𝐵 from 𝑠′, by lines 12–13, 𝑠 will send 𝐵 to 𝑠′, which will eventually arrive, and 𝑠′ receives 𝐵. □

Proof of Lemma 3.6 (2).
We have to show that valid(𝑠′, 𝐵) eventually holds for all correct servers 𝑠′. For Definition 3.3 (i): as 𝑠 considers 𝐵 valid and 𝑠 is correct, 𝐵 has a valid signature. This can be checked by every 𝑠′. We show Definition 3.3 (ii) (a) and (iii) by induction on the sum of the lengths of the paths from genesis blocks to 𝐵. For the base case, 𝐵 has no predecessors. As 𝑠 considers 𝐵 valid, 𝐵 is a genesis block, and 𝑠′ will consider 𝐵 a genesis block, so Definition 3.3 (ii) (a) and (iii) hold. For the step case, let 𝐵′ ∈ 𝐵.preds. By Lemma 3.6 (1), every correct server 𝑠′ will eventually receive 𝐵′. By the induction hypothesis, 𝑠′ will eventually consider 𝐵′ valid. The same reasoning holds for every 𝐵′ ∈ 𝐵.preds. It remains to show that 𝐵 has exactly one parent or is a genesis block. Again, this follows from 𝑠 considering 𝐵 valid. As 𝐵.parent ∈ 𝐵.preds, 𝑠′ also considers 𝐵.parent valid. □

Lemma A.6.
For every 𝐵, every correct server 𝑠 executing gossip of Algorithm 1 inserts ref(𝐵) at most once in any block 𝐵′ with 𝐵′.n = 𝑠.

Proof.
By line 4 of Algorithm 1, a correct server adds a block 𝐵 to blks only if 𝐵 ∉ G, and as blks is a set, 𝐵 appears at most once in blks. Either 𝐵 remains in blks, or by lines 6–8, for any block 𝐵′ with 𝐵′.n = 𝑠, after ref(𝐵) is inserted in 𝐵′, 𝐵 ∈ G holds. Thus 𝐵 ∈ G holds for the rest of the execution, and therefore 𝐵 ∉ blks. As 𝑠 is correct, it will not enter lines 6–8 again for 𝐵. □

Lemma A.7.
Let 𝑠 and 𝑠′ be correct servers with block DAGs G𝑠 and G𝑠′. Then their joint block DAG G = G𝑠 ∪ G𝑠′ is a block DAG for 𝑠.

Proof.
Let bs = 𝐵₀, . . . , 𝐵ₖ₋₁ be the blocks such that 𝐵ᵢ ∈ G𝑠′ but 𝐵ᵢ ∉ G𝑠 for 𝑖 < 𝑘. We show the statement by induction on |bs|. As G𝑠 is a block DAG for 𝑠, the statement holds for the base case. For the step case we pick a 𝐵ᵢ ∈ bs such that 𝐵ᵢ.preds ∩ bs = ∅. Such a 𝐵ᵢ exists, as in the worst case G𝑠 and G𝑠′ are completely disjoint and 𝐵ᵢ is a genesis block in G𝑠′. It remains to show that 𝑠 considers 𝐵ᵢ valid and that all 𝐵ᵢ.preds are in G𝑠. Then by Lemma A.3, G𝑠.insert(𝐵ᵢ) is a block DAG, and by the induction hypothesis the statement holds. For all 𝐵′ ∈ 𝐵ᵢ.preds, 𝐵′ ∈ G𝑠 holds by definition of bs. Moreover, as G𝑠 is the block DAG of 𝑠, 𝑠 considers every 𝐵′ valid. Then by (iii) of Definition 3.3, together with the fact that 𝑠′ is correct, and hence (i) and (ii) hold for 𝑠, 𝑠 considers 𝐵ᵢ valid. □

Lemma A.8. If 𝐵 ∈ G for the block DAG G of a correct server 𝑠, then eventually there is a block DAG G′ of 𝑠 with G′ > G such that 𝐵′ ∈ G′, 𝐵′.n = 𝑠, and 𝐵 ⇀ 𝐵′ for some block 𝐵′.

Proof.
For a correct server 𝑠, 𝐵 ∈ G holds only after 𝑠 inserted 𝐵 either in line 7 or in line 16. Then, by either line 8 or line 18, respectively, 𝐵 ∈ B.preds for B.n = 𝑠. As 𝑠 is correct, 𝑠 will eventually trigger disseminate() and will reach line 16 for B, inserting B into G′ for some G′ > G. □

A.3 Ad Section 4: Interpreting a Protocol
In this section we give the proofs, and the lemmas those proofs rely on, which we omitted in Section 4. All proofs refer to Algorithm 2. For the execution we assume that the body of each handler is executed atomically and sequentially within the handler.
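Before the lemmas, a small executable sketch may help fix intuitions about the interpretation order of Algorithm 2. It reads eligible(𝐵) as "all of 𝐵.preds are already interpreted"; this reading, and all names in the sketch, are illustrative assumptions rather than the paper's code.

```python
# Illustrative sketch of the picking order in line 3 of Algorithm 2.
# Assumption: eligible(B) means "all of B.preds are already interpreted".

class Block:
    def __init__(self, name, preds=()):
        self.name = name
        self.preds = list(preds)   # backward references to earlier blocks

def interpretation_order(blocks):
    """Repeatedly pick an eligible block. Because the block DAG is finite
    and acyclic, an eligible block exists while any block is left, so
    every block is eventually picked (cf. Lemma A.10)."""
    interpreted, order = set(), []
    pending = set(blocks)
    while pending:
        block = next(b for b in pending if all(p in interpreted for p in b.preds))
        order.append(block)
        interpreted.add(block)
        pending.discard(block)
    return order
```

Under this reading, the resulting order respects the happened-before relation ⇀: a block is interpreted only after every block it references.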
Lemma A.9.
For 𝐵 ∈ G, if I[𝐵] = false, then 𝐵.Ms[𝑑, ℓ] = ∅ and 𝐵.PIs[ℓ] = ⊥ for all ℓ ∈ L and 𝑑 ∈ {in, out}.

Proof.
For every 𝐵, ℓ ∈ L, and 𝑑 ∈ {in, out}, initially 𝐵.Ms[𝑑, ℓ] = ∅ and 𝐵.PIs[ℓ] = ⊥. Assume towards a contradiction that 𝐵.Ms[𝑑, ℓ] ≠ ∅ or 𝐵.PIs[ℓ] ≠ ⊥. As 𝐵.Ms[𝑑, ℓ] and 𝐵.PIs[ℓ] are only modified in lines 4–12 after 𝐵 is picked in line 3, by line 12 I[𝐵] = true, contradicting I[𝐵] = false. □

Lemma A.10.
For a correct server executing interpret(G, P) in Algorithm 2, every 𝐵 ∈ G is eventually picked in line 3.

Proof.
To pick 𝐵 in line 3, eligible(𝐵) has to hold. As G is finite and acyclic, eligible(𝐵) eventually holds for every 𝐵 ∈ G. □

Lemma A.11.
For a block 𝐵 ∈ G and an ℓ ∈ L, if I[𝐵] holds, then (1) 𝐵.Ms[𝑑, ℓ] will never be modified again for any 𝑑 ∈ {in, out}, and (2) 𝐵.PIs[ℓ] will never be modified again.

Proof.
For part (1), assume that 𝐵.Ms[𝑑, ℓ] is modified. This can only happen in lines 6, 9, and 11, and only for 𝐵 picked in line 3. But as I[𝐵] holds, 𝐵 cannot be picked in line 3, leading to a contradiction. For part (2), assume that 𝐵.PIs[ℓ] is modified. This can only happen in lines 4 and 11, and only for 𝐵 picked in line 3. But as I[𝐵] holds, 𝐵 cannot be picked in line 3, leading to a contradiction. □

Lemma A.12. If 𝑚 ∈ 𝐵.Ms[out, ℓ] then there is a block 𝐵′ such that (ℓ, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵.

Proof.
In Algorithm 2, 𝑚 ∈ 𝐵.Ms[out, ℓ] holds only after the execution reaches either (1) line 6, and then 𝐵′ = 𝐵, or (2) line 11, and then by line 7 there exists a 𝐵ⱼ such that (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs and ℓ ∈ {ℓⱼ | (ℓⱼ, 𝑟ⱼ) ∈ 𝐵ⱼ.rs ∧ 𝐵ⱼ ∈ G ∧ 𝐵ⱼ ⇀⁺ 𝐵}. □

Lemma A.13.
For all 𝐵.PIs[ℓ] ≠ ⊥ it holds that 𝐵.PIs[ℓ] was started with P(ℓ, 𝐵.n).

Proof.
Either (i) 𝐵 is a genesis block, and then by assumption 𝐵.PIs[ℓ] was started with 𝐵.n and ℓ, or (ii) 𝐵 has a parent, and then by line 4, PIs[ℓ] is copied from 𝐵.parent, and as 𝐵.parent.n = 𝐵.n, 𝐵.PIs[ℓ] was initialized with 𝐵.n and ℓ (Lemma A.15). □

Lemma A.14. If 𝑚 ∈ 𝐵.Ms[out, ℓ] then 𝑚.sender = 𝐵.n.

Proof.
By lines 6 and 11 of Algorithm 2, 𝑚 ∈ 𝐵.Ms[out, ℓ] only if 𝑚 is output either by 𝐵.PIs[ℓ] processing the requests 𝐵.rs, or by 𝐵.PIs[ℓ].receive(𝑚′) for some 𝑚′ whose content is irrelevant here. What is important is that 𝐵.PIs[ℓ] was initialized with 𝐵.n by Lemma A.13, and thus every outgoing message 𝑚 has 𝑚.sender = 𝐵.n. It remains to show that every 𝐵 with 𝐵.n = 𝑠 was built by 𝑠, which follows from 𝐵's signature. □

Lemma A.15.
When the execution of interpret(G, P) reaches line 7 of Algorithm 2, then 𝐵.PIs[ℓⱼ] ≠ ⊥ holds for all ℓⱼ ∈ {ℓⱼ | (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs ∧ 𝐵ⱼ ∈ G ∧ 𝐵ⱼ ⇀∗ 𝐵}.

Proof.
We show the statement by induction on the length of the longest path from the genesis blocks to 𝐵. The base case 𝑛 = 0 holds by assumption, as PIs[ℓ] is started on every genesis block. For the step case, by the induction hypothesis the statement holds for every 𝐵ᵢ ∈ 𝐵.preds, and as 𝐵.parent ∈ 𝐵.preds, by line 4 the statement holds. □

Proof of Lemma 4.1(1).
By definition, 𝑠 sends 𝑚 for some protocol instance ℓ′ if, in Algorithm 2, 𝑠 reaches either line 6 with 𝐵.rs, or line 11 with 𝐵.PIs[ℓ′].receive(𝑚), for some 𝐵 picked in line 3. By Lemma A.15, 𝐵.PIs[ℓ′] ≠ ⊥, and as 𝐵.PIs[ℓ′].n = 𝑠 by assumption, by Lemma A.13 𝐵.n = 𝑠. 𝐵 will be our witness for 𝐵₁. Now 𝑚 ∈ 𝐵.Ms[out, ℓ′], by the assignment in either line 6 with (ℓ′, 𝑟) ∈ 𝐵.rs (by line 5), or in line 11 with (ℓ′, 𝑟) ∈ 𝐵ⱼ.rs for some 𝐵ⱼ ⇀⁺ 𝐵 (by line 7). 𝐵ⱼ is our witness for 𝐵′ ≠ 𝐵. For the other direction, we have 𝐵₁ ∈ G with 𝐵₁.n = 𝑠 such that 𝑚 ∈ 𝐵₁.Ms[out, ℓ′] for a 𝐵′ ∈ G with (ℓ′, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵₁. By Lemma A.10, eventually 𝐵₁ is picked in line 3 of Algorithm 2. By assumption, 𝑚 ∈ 𝐵₁.Ms[out, ℓ′] through either (i) line 6, or (ii) as 𝐵′ ⇀⁺ 𝐵₁ and thus ℓ′ ∈ {ℓⱼ | (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs ∧ 𝐵ⱼ ∈ G ∧ 𝐵ⱼ ⇀⁺ 𝐵₁} from line 11. Then, by definition, 𝑠 sends 𝑚 for protocol instance ℓ′. □

Proof of Lemma 4.1(2).
By definition, 𝑠 receives 𝑚 for protocol instance ℓ′ in line 11 of Algorithm 2 for some 𝐵 picked in line 3 with 𝑚 ∈ 𝐵.Ms[in, ℓ′] by line 10. By Lemma A.15, 𝐵.PIs[ℓ′] ≠ ⊥, and as 𝐵.PIs[ℓ′].n = 𝑠 by assumption, by Lemma A.13 𝐵.n = 𝑠. 𝐵 is our witness for 𝐵₂. Now by line 9, 𝑚 ∈ 𝐵.Ms[in, ℓ′] only if 𝑚 ∈ 𝐵ᵢ.Ms[out, ℓ′] for some 𝐵ᵢ with 𝐵ᵢ ⇀ 𝐵. 𝐵ᵢ is our witness for 𝐵₁. Finally, by line 7, ℓ′ ∈ {ℓⱼ | (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs ∧ 𝐵ⱼ ∈ G ∧ 𝐵ⱼ ⇀⁺ 𝐵}, and 𝐵ⱼ is our witness for 𝐵′. For the other direction, we have 𝐵₁, 𝐵₂ ∈ G with 𝐵₁ ⇀ 𝐵₂ and 𝐵₂.n = 𝑠 and 𝑚 ∈ 𝐵₂.Ms[in, ℓ′] for a 𝐵′ ∈ G such that (ℓ′, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵₁. By Lemma A.10, eventually 𝐵₂ is picked in line 3 of Algorithm 2, and by the assumptions the execution eventually reaches line 11. As 𝑚 ∈ 𝐵₂.Ms[in, ℓ′], by definition 𝑠 receives 𝑚 for protocol instance ℓ′. □

Lemma A.16.
For a correct server 𝑠 executing 𝑠.interpret(G, P), if a server 𝑠 sends a message 𝑚 for a protocol instance ℓⱼ, then 𝑠 sends 𝑚 for a correct server 𝑠′ executing 𝑠′.interpret(G′, P) for any block DAG G′ > G.

Proof.
Again, in the following proof, when executing 𝑠′.interpret(G′, P) we write Ms′ and PIs′ to distinguish them from Ms and PIs when executing 𝑠.interpret(G, P). As 𝑠 sends a message 𝑚 for a protocol instance ℓⱼ, by Lemma 4.1(1) there is a 𝐵₁ ∈ G with 𝐵₁.n = 𝑠 such that 𝑚 ∈ 𝐵₁.Ms[out, ℓⱼ] for a 𝐵ⱼ ∈ G with (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs and 𝐵ⱼ ⇀ⁿ 𝐵₁ for 𝑛 ⩾ 0. By G′ > G, 𝐵₁ ∈ G′, 𝐵ⱼ ∈ G′, and the path 𝐵ⱼ ⇀ⁿ 𝐵₁ is in G′. By Lemma 4.2, 𝑚 ∈ 𝐵₁.Ms′[out, ℓⱼ], and then by Lemma 4.1(1), 𝑠 sends 𝑚 for a correct server 𝑠′ executing 𝑠′.interpret(G′, P). □

Proof of Lemma 4.3 1 (Reliable delivery).
By assumption, 𝑠 sends a message 𝑚 to a correct server 𝑠′ for a correct server 𝑠 executing 𝑠.interpret(G, P). By Lemma 3.7, 𝑠′ will eventually have some G₁ > G. Then by Lemma A.16, 𝑠 sends 𝑚 in 𝑠′.interpret(G₁, P) for G₁ > G. Then by Lemma 4.1(1) there is a 𝐵₁ ∈ G₁ with 𝐵₁.n = 𝑠 such that 𝑚 ∈ 𝐵₁.Ms[out, ℓⱼ] for a 𝐵ⱼ ∈ G₁ with (ℓⱼ, 𝑟) ∈ 𝐵ⱼ.rs and 𝐵ⱼ ⇀∗ 𝐵₁. With 𝐵₁ we have found our first witness. By Lemma A.8, there is eventually some G₂ > G₁ such that 𝐵₂ ∈ G₂ with 𝐵₂.n = 𝑠′ and 𝐵₁ ⇀ 𝐵₂. Then by Lemma 3.7, eventually 𝑠′ will have some G′ > G₂. As 𝑚 ∈ 𝐵₁.Ms[out, ℓⱼ], 𝐵₁ ⇀ 𝐵₂, and 𝑚.receiver = 𝑠′ by assumption, by lines 9–10 of Algorithm 2 we have 𝑚 ∈ 𝐵₂.Ms[in, ℓⱼ]. Now we have found our second witness 𝐵₂. Finally, by Lemma 4.1(2), 𝑠′ receives 𝑚 in 𝑠′.interpret(G′, P). □

Proof of Lemma 4.3 2 (No duplication).
Assume towards a contradiction that 𝑠 received 𝑚 more than once. Then by Lemma 4.1(2) there are 𝐵₁, 𝐵₂ ∈ G with 𝐵₁ ⇀ 𝐵₂, 𝐵₂.n = 𝑠, and 𝑚 ∈ 𝐵₂.Ms[in, ℓ], as well as 𝐵₁′ ⇀ 𝐵₂′, 𝐵₂′.n = 𝑠, and 𝑚 ∈ 𝐵₂′.Ms[in, ℓ], for a 𝐵ⱼ ∈ G such that (ℓ, 𝑟) ∈ 𝐵ⱼ.rs and 𝐵ⱼ ⇀∗ 𝐵₁, but 𝐵₂ ≠ 𝐵₂′. That 𝑠 received the exact same message 𝑚 twice is only possible if 𝐵₁ = 𝐵₁′. That is, 𝑠 built 𝐵₂′ ≠ 𝐵₂ and inserted ref(𝐵₁) in both, which contradicts Lemma A.6 as 𝑠 is correct. □

Proof of Lemma 4.3.3 (Authenticity).
By Lemma 4.1(2) there are 𝐵₁, 𝐵₂ ∈ G with 𝐵₁ ⇀ 𝐵₂ and 𝐵₂.n = 𝑠 and 𝑚 ∈ 𝐵₂.Ms[in, ℓ] for a 𝐵ⱼ ∈ G such that (ℓ, 𝑟) ∈ 𝐵ⱼ.rs and 𝐵ⱼ ⇀∗ 𝐵₁. Then by line 9 of Algorithm 2 there exists a 𝐵ᵢ ∈ 𝐵₂.preds such that 𝑚 ∈ 𝐵ᵢ.Ms[out, ℓ]. As 𝑚 ∈ 𝐵ᵢ.Ms[out, ℓ], by Lemma A.14 𝐵ᵢ.n = 𝑚.sender, and as 𝑚.sender = 𝑠′, 𝐵ᵢ.n = 𝑠′. 𝐵ᵢ will be our witness for 𝐵₁. As 𝑚 ∈ 𝐵ᵢ.Ms[out, ℓ], by Lemma A.12 there is a 𝐵′ such that (ℓ, 𝑟) ∈ 𝐵′.rs and 𝐵′ ⇀∗ 𝐵ᵢ. 𝐵′ is our witness for 𝐵ⱼ. Hence there is a 𝐵₁ ∈ G with 𝐵₁.n = 𝑠′ such that 𝑚 ∈ 𝐵₁.Ms[out, ℓ] for a 𝐵ⱼ ∈ G with (ℓ, 𝑟) ∈ 𝐵ⱼ.rs and 𝐵ⱼ ⇀∗ 𝐵₁, and by Lemma 4.1(1), 𝑚 was sent by 𝑠′. □

A.4 Ad Section 5: Using the Framework
In this section we give the proofs which we omitted in Section 5. All proofs refer to Algorithm 3. For the execution we assume that the body of each handler is executed atomically. We further give an implementation of authenticated double-echo broadcast in Algorithm 4.
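To make the flow behind the following lemmas concrete, here is a compressed, illustrative sketch of how a request travels from the shim through gossip into interpretation. Only the field names rqsts, rs, and n echo the paper; the functions, the dictionary layout, and the flattening of the three algorithms into one pipeline are our simplifying assumptions.

```python
# Compressed sketch of the request path (illustrative; only the field
# names rqsts, rs, and n follow the paper).

def request(state, label, r):
    # lines 6-7 of Algorithm 3: buffer the request for the next block
    state["rqsts"].add((label, r))

def disseminate(state, creator):
    # line 15 of Algorithm 1: embed the buffered requests in a new block
    block = {"n": creator, "rs": sorted(state["rqsts"])}
    state["rqsts"].clear()
    state["dag"].append(block)
    return block

def interpret(state, instances):
    # line 6 of Algorithm 2: hand each block's requests to its instance
    for block in state["dag"]:
        for (label, r) in block["rs"]:
            instances.setdefault(label, []).append(r)
```

The point of the sketch is the pipeline shape: a request is never acted on directly, it only takes effect once the block carrying it is interpreted.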
Lemma A.17.
For a correct server 𝑠 executing shim (P) , if request ( 𝑟, ℓ ) is requested from 𝑠 , then 𝑟 is requested in P . Proof.
By executing shim(P), a correct 𝑠 inserts (ℓ, 𝑟) in rqsts in lines 6–7 of Algorithm 3. Then, executing gossip(𝑠, G, rqsts), 𝑠 will eventually disseminate a block 𝐵 with 𝐵.n = 𝑠 and (ℓ, 𝑟) ∈ 𝐵.rs in line 15 of Algorithm 1, and 𝐵 ∈ G holds after triggering disseminate in lines 10–11 of Algorithm 3. Now, executing interpret(G, P), 𝑠 will, for 𝐵 ∈ G, call 𝐵.PIs[ℓ] with the requests 𝐵.rs in line 6 of Algorithm 2. □

Lemma A.18.
For a correct server 𝑠 executing shim(P), if P indicates 𝑖 ∈ Inds P for 𝑠, then shim(P) indicates (ℓ, 𝑖).

Proof.
By assumption, a correct 𝑠 indicates 𝑖 for ℓ, and hence indicates in interpret(G, P) by lines 13–14 of Algorithm 2. Then, by executing shim(P), as 𝑠 = 𝑠′, 𝑠 triggers indicate(ℓ, 𝑖) with 𝑖 ∈ Inds P by lines 8–9 of Algorithm 3. □

module broadcast(𝑠 ∈ Srvrs)
  echoed, readied, delivered ≔ false
  upon broadcast(𝑣 ∈ Vals) and authenticate(𝑣)
    echoed ≔ true
    send ECHO 𝑣 to every 𝑠′ ∈ Srvrs
  when received ECHO 𝑣 and not echoed
    echoed ≔ true
    send ECHO 𝑣 to every 𝑠′ ∈ Srvrs
  when received ECHO 𝑣 from 2𝑓 + 1 different 𝑠′ ∈ Srvrs and not readied
    readied ≔ true
    send READY 𝑣 to every 𝑠′ ∈ Srvrs
  when received READY 𝑣 from 𝑓 + 1 different 𝑠′ ∈ Srvrs and not readied
    readied ≔ true
    send READY 𝑣 to every 𝑠′ ∈ Srvrs
  when received READY 𝑣 from 2𝑓 + 1 different 𝑠′ ∈ Srvrs and not delivered
    delivered ≔ true
    deliver(𝑣)

Algorithm 4: Authenticated double-echo broadcast.
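A runnable sketch of the double-echo state machine for a single broadcast value may be useful. Message transport and the authenticate check are abstracted away, f is the assumed fault threshold, and the quorum sizes used (2f + 1 for echo and delivery, f + 1 for ready amplification) are the standard double-echo thresholds; the class and its names are ours, not the paper's.

```python
# Illustrative single-value sketch of a double-echo broadcast replica.
# Assumptions: f is the fault threshold; transport and authentication
# are handled elsewhere; senders are deduplicated by name.

class DoubleEcho:
    def __init__(self, f):
        self.f = f
        self.echoed = self.readied = self.delivered = False
        self.echo_from, self.ready_from = set(), set()
        self.outgoing = []   # messages this replica would send

    def broadcast(self, v):
        if not self.echoed:
            self.echoed = True
            self.outgoing.append(("ECHO", v))

    def on_echo(self, sender, v):
        self.echo_from.add(sender)
        if not self.echoed:                 # relay the first ECHO seen
            self.echoed = True
            self.outgoing.append(("ECHO", v))
        if len(self.echo_from) >= 2 * self.f + 1 and not self.readied:
            self.readied = True
            self.outgoing.append(("READY", v))

    def on_ready(self, sender, v):
        self.ready_from.add(sender)
        if len(self.ready_from) >= self.f + 1 and not self.readied:
            self.readied = True             # amplification step
            self.outgoing.append(("READY", v))
        if len(self.ready_from) >= 2 * self.f + 1 and not self.delivered:
            self.delivered = True
            self.outgoing.append(("DELIVER", v))
```

The guards on the three flags mirror the "and not echoed/readied/delivered" conditions of Algorithm 4: each replica echoes, readies, and delivers at most once per value.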