Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures (full version)
Christian Scheideler, Alexander Setzer, and Thim Strothmann

Abstract
Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.

C.2.4 Distributed Systems
Keywords and phrases
Topological Self-Stabilization, Monotonic Searchability, Node Departures
The Internet has opened up tremendous opportunities for people to interact and exchange information. Particularly popular ways to interact are peer-to-peer systems and social networks. For these systems to stay popular, it is very important that they are highly available. However, once these systems become large enough, changes and faults are not an exception but the rule. Therefore, mechanisms are needed that ensure that whenever there are problems, they are quickly repaired, and all parts of the system that are still functional should not be affected by the repair process. Protocols that are able to recover from arbitrary states are also known as self-stabilizing protocols.

∗ This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center "On-The-Fly Computing" (SFB 901).

© Christian Scheideler, Alexander Setzer, and Thim Strothmann; licensed under Creative Commons License CC-BY. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.
Since the seminal paper of Dijkstra in 1974 [4], self-stabilizing protocols have been investigated for many classical problems including leader election, consensus, matching, clock synchronization and token distribution problems. Recently, also various protocols for self-stabilizing overlay networks have been proposed (e.g., [14, 9, 6, 10, 5, 1, 11, 12, 2]). However, for all of these protocols it is only known that they eventually converge to the desired solution, but the convergence process is not necessarily monotonic. In other words, it is not ensured for two points in time t, t′ with t < t′ that the functionality of the topology at time t′ is better than the functionality at time t.

In this paper, we focus on protocols for self-stabilizing overlay networks that guarantee the monotonic preservation of a characteristic that we call searchability, i.e., once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. Searchability is a useful and natural characteristic for an overlay network since searching for other participants is one of the most common tasks in real-world networks. Moreover, a protocol that preserves monotonic searchability has the huge advantage that in every state, even if the self-stabilization process has not converged yet, the already built topology can already be used for search requests.

As a starting point for rigorous research on monotonic searchability, we focus on building a self-stabilizing protocol that preserves monotonic searchability for the line graph. Although the topology itself is fairly simple, preserving searchability during the self-stabilization process turns out to be quite challenging. Additionally, we study monotonic searchability for the line graph if the node set is dynamic, i.e., nodes are allowed to leave the network.
We consider a distributed system consisting of a fixed set of nodes in which each node has a unique reference and a unique immutable numerical identifier (or short id). The system is controlled by a protocol that specifies the variables and actions that are available in each node. In addition to the protocol-based variables there is a system-based variable for each node called channel whose values are sets of messages. We denote the channel of node u as u.Ch, and u.Ch contains all incoming messages to u. Its message capacity is unbounded and messages never get lost. A node can add a message to u.Ch if it has a reference to u. Besides these channels there are no further communication means, so only point-to-point communication is possible.

There are two types of actions. The first type of action has the form of a standard procedure ⟨label⟩(⟨parameters⟩) : ⟨command⟩, where label is the unique name of that action, parameters specifies the parameter list of the action, and command specifies the statements to be executed when calling that action. Such actions can be called remotely. In fact, we assume that every message must be of the form ⟨label⟩(⟨parameters⟩), where label specifies the action to be called in the receiving node and parameters contains the parameters to be passed to that action call. All other messages will be ignored by the nodes. Apart from being triggered by messages, these actions may also be called locally by the nodes, which causes their immediate execution. The second type of action has the form ⟨label⟩ : ⟨guard⟩ → ⟨command⟩, where label and command are defined as above and guard is a predicate over local variables. We call an action whose guard is simply true a timeout action.

The system state is an assignment of a value to every variable of each node and messages to each channel. An action in some node p is enabled in some system state if its guard evaluates to true, or if there is a message in p.Ch requesting to call it.
In the latter case the corresponding message is processed (in which case it is removed from p.Ch). An action is disabled otherwise. Receiving and processing a message is considered as an atomic step.

A computation is an infinite fair sequence of system states such that for each state s_i, the next state s_{i+1} is obtained by executing an action that is enabled in s_i. This disallows the overlap of action execution, that is, action execution is atomic. We assume weakly fair action execution and fair message receipt. Weakly fair action execution means that if an action is enabled in all but finitely many states of the computation, then this action is executed infinitely often. Note that the timeout action of a node is executed infinitely often. Fair message receipt means that if the computation contains a state where there is a message in a channel of a node that enables an action in that node, then that action is eventually executed with the parameters of that message, i.e., the message is eventually processed. Besides these fairness assumptions, we place no bounds on message propagation delay or relative node execution speeds, i.e., we allow fully asynchronous computations and non-FIFO message delivery. A computation suffix is a sequence of computation states past a particular state of this computation. In other words, the suffix of the computation is obtained by removing the initial state and finitely many subsequent states. Note that a computation suffix is also a computation.

We consider protocols that do not manipulate the internals of node references. Specifically, a protocol is compare-store-send if the only operations that it executes on node references are comparing them, storing them in local memory and sending them in a message. That is, operations on references such as addition, radix computation, hashing, etc. are not used. In a compare-store-send protocol, if a node does not store a reference in its local memory, the node may learn this reference only by receiving it in a message. A compare-store-send protocol cannot introduce new references to the system. It can only operate on the references that are already there.

The overlay network of a set of nodes is determined by their knowledge of each other. We say that there is a (directed) edge from a to b, denoted by (a, b), if node a stores a reference of b in its local memory or has a message in a.Ch carrying the reference of b. In the former case, the edge is called explicit (drawn solid in figures), and in the latter case, the edge is called implicit (drawn dashed). With NG we denote the directed network (multi-)graph given by the explicit and implicit edges. ENG is the subgraph of NG induced by only the explicit edges. A weakly connected component of a directed graph G is a subgraph of G of maximum size such that for any two nodes u and v in that subgraph there is a (not necessarily directed) path from u to v. Two nodes that are not in the same weakly connected component are disconnected.

We say a node a is to the left (right, respectively) of a node b if id(a) < id(b) (id(a) > id(b), respectively). If there is an edge (a, b) between the two, then a is a left neighbor (right neighbor). For three nodes a, b, c with id(a) < id(b), id(a) < id(c) (or id(a) > id(b), id(a) > id(c), respectively), we say a node b is closer to a than c if |id(a) − id(b)| < |id(a) − id(c)|. If it is clear from the context, we sometimes refer to the identifier of a node by dropping the id notation, e.g., we write a < b instead of id(a) < id(b).

In this paper we are particularly concerned with search requests, i.e., Search(v, destID) messages that are routed along ENG according to a given routing protocol, where v is the sender of the message and destID is the identifier of a node we are looking for. Note that destID does not necessarily belong to an existing node w, since we also want to model search requests to non-existing nodes. If a Search(v, destID) message reaches a node w with id(w) = destID, the search request succeeds; if the message reaches some node u with id(u) ≠ destID and cannot be forwarded anymore according to the given routing protocol, the search request fails. We assume that nodes themselves initiate Search() requests at will. Therefore, the Search(destID) action is never explicitly called.

We need some additional notation for our results of Section 4, in which we extend the protocol to handle nodes that want to leave the system. A node u has a variable mode ∈ {leaving, staying} that is read-only. If this variable is set to leaving, the node is leaving; the node is staying if the variable is set to staying. Note that staying nodes can dynamically decide at any arbitrary state if they want to leave the system by executing a corresponding leave action. However, a leaving node cannot switch back to staying. The ultimate goal of a leaving node is to depart from the system. There is one special command that is important for the study of leaving nodes: exit. If a node executes exit, it enters a designated exit state and all remaining edges to or from that node are deleted. We call such a node gone. A node that is not gone is called present. For a gone node all actions are disabled; in particular it will not execute the timeout action regularly.

A protocol is self-stabilizing if it satisfies the following two properties.
Convergence: starting from an arbitrary system state, the protocol is guaranteed to arrive at a legitimate state.

Closure: starting from a legitimate state the protocol remains in legitimate states thereafter.

A self-stabilizing protocol is thus able to recover from transient faults regardless of their nature. Moreover, a self-stabilizing protocol does not have to be initialized, as it eventually starts to behave correctly regardless of its initial state. In topological self-stabilization we allow self-stabilizing protocols to perform changes to the overlay network, resp. NG. A legitimate state may then include a particular graph topology or a family of graph topologies.

In this paper we want to build a self-stabilizing protocol for the linearization problem, i.e., the nodes are sorted by identifiers and each node stores only two references: its closest successor and its closest predecessor. From a global point of view, the nodes build a line graph topology. Of course, searching is easy once a legitimate state has been reached. However, searching reliably during the stabilization phase is much more involved. We say a (self-stabilizing) protocol satisfies monotonic searchability according to some routing protocol R if it holds for any pair of nodes v, w that once a Search(v, id(w)) request (that is routed according to R) initiated at time t succeeds, any Search(v, id(w)) request initiated at a time t′ > t will succeed. We do not mention R if it is clear from the context. A protocol is said to satisfy non-trivial monotonic searchability if it satisfies monotonic searchability and in every computation of the protocol there is a suffix such that for each pair of nodes v, w for which there is a path from v to w in the target topology, Search(v, id(w)) requests will succeed.

Furthermore, we give a self-stabilizing protocol that satisfies non-trivial monotonic searchability, solves the linearization problem and solves the Finite Departure Problem of [7]. The following problem statement is adapted from [13]:
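To make the success/failure notions concrete, the following toy sketch routes a search greedily along the explicit edges of an already-stabilized line: forward to the neighbor closest to destID, succeed on an exact id match, and fail when no neighbor is closer. The encoding and names are ours; this is a generic routing rule R for illustration, not the Search+ protocol defined later.

```python
# Toy greedy routing rule on a static graph of explicit edges (our own
# illustrative encoding, not the paper's Search+ protocol).
def route_search(neighbors, start, dest_id):
    """neighbors maps each node id to the ids it has explicit edges to."""
    current, visited = start, [start]
    while current != dest_id:
        closer = [v for v in neighbors.get(current, [])
                  if abs(v - dest_id) < abs(current - dest_id)]
        if not closer:
            return visited, False   # no closer neighbor: the search fails
        current = min(closer, key=lambda v: abs(v - dest_id))
        visited.append(current)
    return visited, True            # id(current) = destID: search succeeds

# Explicit edges of a legitimate line over the ids 1..4:
line = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

On this line, a search from 1 to 4 traverses the whole line, while a search for the non-existing id 5 fails at node 4. Once the topology is legitimate, every reachable pair succeeds; monotonic searchability demands that pairs which succeed once keep succeeding already during stabilization.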
Finite Departure Problem (FDP): In case the exit command is available, eventually reach a system state in which (i) every staying node is awake, (ii) every leaving node is gone, and (iii) for each weakly connected component of the initial network graph, the staying nodes in that component still form a weakly connected component.

Consequently, a leaving node u should safely execute exit, i.e., the removal of u and its incident edges from NG does not disconnect any present nodes and does not violate searchability.
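The FDP goal state can be sketched as a global predicate. The encoding below via mode flags and undirected edge sets is our own illustration of conditions (ii) and (iii); condition (i), "every staying node is awake", is not modeled here, since we do not encode sleeping nodes.

```python
# Global-snapshot sketch of the FDP goal state (our own encoding).
def components(nodes, edges):
    """Weakly connected components of an undirected graph, via DFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def fdp_goal_reached(mode, initial_edges, current_edges):
    """mode maps node -> 'staying'/'leaving'; edge sets are undirected."""
    staying = {v for v, m in mode.items() if m == 'staying'}
    # (ii) every leaving node is gone, so no current edge may touch one:
    if any(u not in staying or v not in staying for u, v in current_edges):
        return False
    # (iii) staying nodes of each initial component remain weakly connected:
    current_comps = components(staying, current_edges)
    for comp in components(set(mode), initial_edges):
        target = comp & staying
        if target and not any(target <= c for c in current_comps):
            return False
    return True

mode = {1: 'staying', 2: 'leaving', 3: 'staying'}
```

For example, if node 2 of a line 1–2–3 leaves, the goal is only reached once a bridging edge such as (1, 3) exists; dropping node 2 without it would disconnect the staying nodes.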
The idea of self-stabilization in distributed computing was introduced in a classical paper by E. W. Dijkstra in 1974 [4], in which he looked at the problem of self-stabilization in a token ring. In order to recover certain network topologies from any weakly connected state, researchers started with simple line and ring networks (e.g., [16, 15, 8]). Over the years more and more network topologies were considered, ranging from skip lists and skip graphs [14, 9], to expanders [6], Delaunay graphs [10], hypertrees and double-headed radix trees [5, 1], small-world graphs [11] and a Chord variant [12]. Also a universal algorithm for topological self-stabilization is known [2].

Close to our work is the notion of monotonic convergence by Yamauchi and Tixeuil [17]. A self-stabilizing protocol is monotonically converging if every change done by a node p makes the system approach a legitimate state and if every node changes its output only once. The authors investigate monotonically converging protocols for different classic distributed problems (e.g., leader election and vertex coloring) and focus on the amount of non-local information that is needed for them.

Our study of the Finite Departure Problem is heavily inspired by [7], in which the authors propose the aforementioned problem to study graceful departures of nodes in a self-stabilizing setting, i.e., nodes that want to leave a distributed system should decide when they can leave without affecting weak connectivity of the topology. They conclude that in general it is not possible to solve the FDP. However, with the use of distributed oracles (which are specialized failure detectors [3]) the authors propose a protocol that solves the problem and arranges the nodes in a line. Additionally, they show that oracles are not needed if the problem is transformed into a non-decision variant. In [13] the idea is generalized to a protocol framework that solves the FDP without being reliant on a certain topology and is thereby combinable with most existing overlay protocols.
To the best of our knowledge, this paper presents the first attempt to impose stricter requirements on the self-stabilization process in topological self-stabilization. We define and study monotonic searchability, which captures a typical use case for overlay networks, i.e., searching for other nodes. More formally, we want to guarantee for a self-stabilizing topology that once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We focus on studying non-trivial monotonic searchability for the list topology. First, we show that in general it is impossible to provide non-trivial monotonic searchability from any initial system state, due to the presence of certain initial messages. This justifies studying searchability only for so-called admissible system states in which these messages are not present anymore, as long as the protocol guarantees convergence to these states. We give a self-stabilizing list protocol and an appropriate search protocol that achieve the desired goal and prove their correctness. Moreover, we broaden the scope of the problem statement by allowing nodes to leave the line topology, i.e., solving the Finite Departure Problem in addition to the aforementioned problems. Also for this combination of problems we present suitable protocols and prove their correctness.

Since gone nodes will never execute any action, we only consider initial states in which all nodes are present. We also restrict the initial state to contain only a finite number of messages that can trigger actions specified by our protocol, since other messages are ignored by the nodes. Finally, we do not allow the presence of references that do not belong to a node in the system. From now on, an initial system state satisfies all of these constraints. The following propositions are restatements of results in [14] and imply further necessary conditions on initial system states.

If a compare-store-send program solves the linearization problem, each computation starts in a weakly connected initial state.

If a compare-store-send program solves the linearization problem, each computation starts in a state in which all references belong to present nodes.

A message invariant is a predicate of the following form: If there is a message m in the incoming channel of a node, then a predicate P must hold. A protocol may specify one or more message invariants. An arbitrary message m in a system is called corrupted if the existence of m violates one of the message invariants. A state s is called admissible if there are no corrupted messages in s. We say a protocol admissible-message satisfies a property if the following two conditions hold: (i) in computations in which every state is admissible, it satisfies the property, and (ii) starting from any initial state, there is a computation suffix in which every state is admissible. A protocol unconditionally satisfies a property if it satisfies this property starting from any state.

With this notion in mind, we can show that admissible-message satisfaction is necessary for non-trivial monotonic searchability for any routing algorithm R.

Lemma 1. If a compare-store-send self-stabilizing protocol satisfies non-trivial monotonic searchability, then this protocol must be admissible-message satisfying.
The structure of the proof is as follows: we consider an arbitrary unconditionally satisfying protocol and show that it does not satisfy monotonic searchability by creating a bad instance for this protocol. In particular, we exploit that our model does not ensure FIFO delivery of messages.
Proof.
Assume there is a compare-store-send self-stabilizing protocol that unconditionally satisfies non-trivial monotonic searchability. First of all, note that if it violates only the second condition of admissible-message satisfiability, then there are computations in which monotonic searchability is never satisfied, implying that it cannot satisfy non-trivial monotonic searchability. Thus, assume that the first condition is violated, i.e., the protocol satisfies the property in computations with arbitrary messages, regardless of any invariants. Consider the network given in Figure 1.

Figure 1 Instance for this proof: the three nodes u, v, w.
The implicit edge (v, w) is in v.Ch. We carry out the proof as a game between the protocol and an adversary: based on the decisions of the protocol, the adversary may decide on the delivery speed of messages and imitate additional messages at each node. The latter is possible since nodes cannot distinguish between these messages and messages from an initial state that have not been received yet. Furthermore, the adversary may set the initial state of the nodes.

At first, we issue a search(u, w) request in u that we denote by a in the following. We argue that the adversary can force u to forward a to v. Therefore, note the following:

- As long as u does not receive any further messages, u does not know any other node, so v is the only possible next hop for a.
- If u tries to wait for a certain amount of time before sending a, the adversary simply halts the system for that time, i.e., no messages are delivered in that timeframe and the system state stays the same.
- If u requires the receipt of another message in order to forward a, the adversary imitates this message at u.
- If u relies on its internal state to forward a, the adversary changes the initial state of u such that it does not forward any message, which contradicts the assumption that non-trivial monotonic searchability is satisfied. Therefore, u must not rely on its state to forward a.
- There are no other conditions that u can wait on.

Therefore, u will send out a to v eventually. At the point in time when u does so, we issue a second search(u, w) request in u, which we denote by b. For similar reasons as stated above, b must be sent to v at some point in time as well.

Since both messages are in v.Ch and the adversary is allowed to decide message speeds, it lets v receive b first. Node v has no explicit edge to u, and the adversary can enforce that the implicit edge (v, w) will not be received by v until v handles b.
Therefore, b must be answered with "FAIL" at some point in time (since b cannot be forwarded anymore) and u will be informed about that.

Next, the adversary causes the edge (v, w) to arrive at v. Since the protocol must stabilize to the line, at some point in time the edge (v, w) will be established. Until then, the adversary withholds message a in v.Ch. Afterwards, when a arrives at v, it can be forwarded to w and thus correctly served.

Therefore, message a succeeds, whereas message b that was sent after message a fails. This is a contradiction to the assumption that the protocol achieves non-trivial monotonic searchability.

Consequently, to prove non-trivial monotonic searchability for a protocol (according to a given routing protocol R) it is sufficient to show that: (i) the protocol has a computation suffix in which every state is admissible and (ii) the protocol guarantees non-trivial monotonic searchability according to R in admissible states.

For the FDP, it was shown in [7] that there is no distributed protocol within our model that can decide when it is safe for a node u to leave the system and thereby solve the FDP. The authors circumvent this impossibility result with the help of oracles. In general, an oracle is a predicate that depends on the current system state and the node calling it. In the context of the FDP, an oracle is supposed to advise a leaving node when it is safe to execute exit. We use the oracle NIDEC as introduced in [7] in order to solve the FDP. NIDEC evaluates to true for a node u calling it if no node v ≠ u has a reference to u in its local memory or in a message in v.Ch, and if u.Ch is empty. For an in-depth discussion of oracles for the FDP, we refer the reader to [7, 13].
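The NIDEC predicate can be sketched over a global snapshot. The dict-based state encoding below is our own illustration; in the model NIDEC is an oracle precisely because no node can evaluate this predicate from its local view alone.

```python
# Global-snapshot sketch of the NIDEC predicate described above (our own
# encoding; messages are modeled as tuples of the references they carry).
def nidec(u, memory, channels):
    """True iff u.Ch is empty and no node v != u holds a reference to u,
    neither in its local memory nor in a message in v.Ch."""
    if channels.get(u):
        return False                       # u.Ch must be empty
    for v, refs in memory.items():
        if v != u and u in refs:
            return False                   # explicit edge (v, u) exists
    for v, msgs in channels.items():
        if v != u and any(u in m for m in msgs):
            return False                   # implicit edge (v, u) exists
    return True

memory = {1: {2}, 2: {1}, 3: set(), 4: set()}
channels = {1: [], 2: [(3,)], 3: [], 4: []}   # a message in 2.Ch carries 3
```

Here node 4 could safely exit, while node 3 could not: even though no node stores 3 explicitly, a message in 2.Ch still carries a reference to it, i.e., an implicit edge (2, 3) exists.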
In this section, we present the Build-List+ protocol and the Search+ protocol. Build-List+ solves the linearization problem and is admissible-message satisfying non-trivial monotonic searchability according to Search+. Note that any protocol satisfying non-trivial monotonic searchability must be admissible-message satisfying, as shown in Section 2. This section is organized as follows: First, we describe Build-List+ and Search+ in detail (Subsection 3.1). Then, we prove that the Build-List+ protocol solves the linearization problem (Subsection 3.2). Last, we prove that the Build-List+ protocol satisfies non-trivial monotonic searchability according to Search+ (Subsection 3.3). From now on we drop the "according to Search+" clause, since we only consider searchability for Search+.
The Build-List+ protocol builds upon the protocol introduced in [15] that solves the linearization problem. For this protocol, every node only keeps a single left and right neighbor. If a node u receives a reference of a node v with u < v (u > v, respectively), u either saves v as its new right (left) neighbor if v is closer to u than the current right (left) neighbor w and delegates the reference of w to v, or (in case v is not closer) v is not saved and delegated to w. Here, delegation means that the reference of a node is sent in a message to another node and not kept in the local memory. A natural (local) search protocol for this topology is to always forward search requests to the neighbor closest to the desired target node, or to abort the search request in case no such neighbor exists. Note that these easy and elegant protocols cannot guarantee monotonic searchability due to three simple facts: (i) due to delegation, it is possible that an explicit edge (u, v) is replaced by an explicit edge (u, w) and an implicit edge (w, v), (ii) consequently, u and v are not in the same weakly connected component in ENG (even though they were before delegation), and (iii) searchability is defined for ENG.

The Build-List+ protocol introduces the following changes in order to satisfy monotonic searchability: Instead of having a single left and right neighbor, a node u has sets of neighbors Left and Right (that it sorts implicitly according to id). In the following, whenever we use the notation Left(u)/Right(u), we refer to these sets of a node u. The main principle that we use is that a node w does not delegate any edge to a node v stored in Left(w) or Right(w) directly. Instead it first introduces (using Introduce(v, w)) this node to another node u, waits for an acknowledgement that the edge has been added to Left(u) or Right(u) (which is basically the Linearize(v) message) and then delegates the edge to a node closer to v (using TempDelegate(v)). More specifically, whenever a node u has multiple neighbors to one side, it does not delegate edges to the closest neighbor directly, but does the following. W.l.o.g. assume that it has multiple neighbors w_1, ..., w_ℓ to the right with id(w_i) < id(w_{i+1}). In the Timeout action, u introduces w_i to w_{i−1} with an Introduce(w_i, u) message. Thereby, w_{i−1} knows that it got the reference from u, saves the reference to w_i directly, sends a Linearize(w_i) message back to u and a TempDelegate(u) to itself (the latter is only to preserve connectivity). Node u can now react to that Linearize(w_i) message by deleting w_i from its memory and sending the reference to the closest node to the left of w_i in Right (which is not necessarily w_{i−1} anymore). Thereby, u preserves a path of explicit edges between u and w_i. Additionally, u sends its own reference to the closest neighbors with an Introduce(u, ⊥) message, which they turn into a TempDelegate(u) message. In general, the TempDelegate(u) action is used to delegate an implicit edge to a node u into one direction (i.e., to the left or to the right) as long as there is a node between the current node and u in Left or Right. Note that implicit edges are not used for search, thus we do not have to apply the principle of introducing first and delegating afterwards for this kind of edges. However, we have to delegate in order to preserve connectivity and to stabilize to the line eventually. Note that, even though a node temporarily has more references than necessary for the final line topology, our protocol still eventually stabilizes to the line, as we will show later.

The pseudocode for all Build-List+ actions is given in Listing 1. Note that a node refers to itself with the expression self. Additionally, keep in mind that the timeout action is the only action that is not triggered as a result of another action; instead, it is triggered regularly.
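The introduce-then-delegate principle can be replayed sequentially in a toy setting. The synchronous scheduling and the flat dict of neighbor sets below are our own illustrative simplifications of the asynchronous protocol; the point is that u only drops its explicit edge to w_2 after w_1 has stored it, so a path of explicit edges from u to w_2 exists at all times.

```python
# Toy, sequential replay of one introduce-then-delegate handshake
# (hypothetical simplification of the asynchronous Build-List+ protocol).
def introduce_then_delegate(memory, u, w1, w2):
    """u has right neighbors w1 < w2 and wants to keep only the closer one.
    memory maps each node id to the set of ids it stores explicitly."""
    assert w1 in memory[u] and w2 in memory[u] and u < w1 < w2
    # Timeout at u: Introduce(w2, u) is sent to w1.
    # On receipt, w1 stores w2 first ...
    memory[w1].add(w2)
    # ... and acknowledges with Linearize(w2); only now may u drop its
    # explicit edge to w2, since the path u -> w1 -> w2 already exists.
    memory[u].discard(w2)
    return memory

mem = {0: {1, 2}, 1: set(), 2: set()}
introduce_then_delegate(mem, 0, 1, 2)
```

Had node 0 delegated its reference to 2 directly (the [15] style), 2 would temporarily be reachable from 0 only via an implicit edge, which is exactly what breaks monotonic searchability on ENG.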
Search+ protocol works as follows: Whenever the
Whenever the InitiateNewSearch(destID) action is called at a node u, u creates a new Search(u, destID) message and starts to periodically initiate ForwardProbe(u, destID, {u}, self.seq) messages that it sends to itself. In the following, assume id(u) < destID (the other case is analogous). Each ForwardProbe() message has a set of nodes, called Next, attached to it, which contains the nodes the message will visit in the future. It also has a counter seq attached to it, whose meaning we will explain later. Whenever a ForwardProbe(u, destID, Next, seq) message is at a node w, w removes itself from Next and adds all its right neighbors x with id(x) ≤ destID to Next. Then it forwards the ForwardProbe(u, destID, Next, seq) message to the node with minimal id in Next. If a ForwardProbe(u, destID, Next, seq) message arrives at a node v with id(v) = destID, v directly responds with a ProbeSuccess(destID, seq, v) message to u. However, if Next is empty at a node w with id(w) ≠ destID after w has added the aforementioned right neighbors, the ForwardProbe() message is answered with a ProbeFail(destID, seq) message. In any case, as soon as u receives the response, it acts accordingly: If the answer to a ForwardProbe(u, destID, Next, seq) message is a ProbeFail(destID, seq) message, u drops the corresponding Search(u, destID) messages completely. If the answer is a ProbeSuccess(destID, seq, v) message, the Search(u, destID) messages waiting at u are directly sent to v.
Note that if additional Search(u, destID) messages are created at u while u is still waiting for an answer to an earlier initiated ForwardProbe(), these requests simply wait together with the previous request (realized by a simple WaitingFor[destID] field) and are dropped or sent as soon as the ProbeFail() or ProbeSuccess() response arrives at u (i.e., search requests to the same destination are sent out in batches if possible). Furthermore, note that nodes do not memorize whether they have already sent ForwardProbe() messages to a certain destination. Due to corrupt initial states, this knowledge could be wrong, and nodes relying on this knowledge would wait forever. Therefore, nodes send ForwardProbe() messages periodically instead of only once. Note that because we make no assumptions on the message delivery speed and channels are not FIFO, it is possible that ProbeFail() messages arrive at a node u that are answers to ForwardProbe() messages initiated long ago. However, in the meantime, there might have been successful responses. To deal with this, each node u stores a sequence number counter seq. Whenever InitiateNewSearch(destID) is executed by u and there is no Search(u, destID) message that waits for an answer to a ForwardProbe(u, destID, Next, seq) message, u increments u.seq, stores the new u.seq value in seq[destID], and always attaches the current sequence number (u.seq) to each ForwardProbe() message u sends. Responses to probes (success and failure) sent back to u also contain this sequence number. Whenever a response arrives at u, u checks whether the sequence number in this message is at least the sequence number stored for destID. If not, it simply drops the message, since in that case the answer belongs to a ForwardProbe() message sent for an earlier batch of Search(u, destID) messages that has already been processed. The complete pseudocode for Search+ is given in Listing 2.
In order not to unnecessarily blow up the pseudocode, we intentionally left out a sanity check for each node, i.e., before executing each action, each node u makes sure that Left only contains nodes v with id(v) < id(u) and that Right only contains nodes v with id(u) < id(v). If this is not the case for some node v, u rearranges the reference to v accordingly. This way, in every computation, the following lemma holds:

Listing 1
Build-List+ protocol

Timeout
  for all destID ∈ Waiting
    send ForwardProbe(self, destID, {self}, self.seq) to self
  // Let Left = {v_1, v_2, ..., v_k} with id(v_1) < id(v_2) < ... < id(v_k)
  for all v_i ∈ Left with 1 ≤ i < k
    send Introduce(v_i, self) to v_{i+1}
  // Let Right = {w_1, w_2, ..., w_l} with id(w_1) < id(w_2) < ... < id(w_l)
  for all w_i ∈ Right with 1 < i ≤ l
    send Introduce(w_i, self) to w_{i-1}
  send Introduce(self, ⊥) to v_k
  send Introduce(self, ⊥) to w_1

Introduce(v, w)
  if (id(v) < id(self))
    if (w ≠ ⊥)
      Left ← Left ∪ {v}
      send Linearize(v) to w
      send TempDelegate(w) to self
    else  // w = ⊥
      send TempDelegate(v) to self
  else if (id(v) > id(self))
    // Analogous to the previous case.

Linearize(v)
  send TempDelegate(v) to self
  if (id(v) < id(self))
    if (Left ≠ ∅)
      x ← argmax{id(x) | x ∈ Left}
      if (v ≠ x)
        w ← argmin{id(w) | w ∈ Left and id(w) > id(v)}
        Left ← Left \ {v}
        send TempDelegate(v) to w
  else if (id(v) > id(self))
    // Analogous to the previous case.

TempDelegate(u)
  if (id(u) < id(self))
    if (Left = ∅)
      Left ← Left ∪ {u}
    else  // Left ≠ ∅
      x ← argmax{id(x) | x ∈ Left}
      if (id(x) < id(u))
        Left ← Left ∪ {u}
      else if (id(x) > id(u))
        send TempDelegate(u) to x
  else if (id(u) > id(self))
    // Analogous to the previous case.

Listing 2
Search+ protocol

InitiateNewSearch(destID)
  create new message m = Search(self, destID)
  if (WaitingFor[destID] = ∅)
    WaitingFor[destID] ← {}
    self.seq ← self.seq + 1
    seq[destID] ← self.seq
  // Store the message in WaitingFor
  WaitingFor[destID] ← WaitingFor[destID] ∪ {m}

ForwardProbe(source, destID, Next, seq)
  if (destID = id(self))
    if (Next ≠ ∅)
      for all u ∈ Next
        send TempDelegate(u) to self
    send ProbeSuccess(destID, seq, self) to source
    send TempDelegate(source) to self
  else  // destID ≠ id(self)
    if (destID > id(self))
      Next ← Next \ {self} ∪ {w ∈ Right | id(w) ≤ destID}
      if (Next = ∅)
        send ProbeFail(destID, seq) to source
        send TempDelegate(source) to self
      else  // Next ≠ ∅
        u ← argmin{id(w) | w ∈ Next}
        if (id(u) < id(self))
          send TempDelegate(u) to self
        else if (id(u) < id(argmin{id(v) | v ∈ Right}))
          Right ← Right ∪ {u}
        send ForwardProbe(source, destID, Next, seq) to u
    else if (destID < id(self))
      // Analogous to the previous case.

ProbeSuccess(destID, seq, dest)
  if (seq ≥ seq[destID])
    /* The message belongs to currently
       stored search requests to dest. */
    send all m ∈ WaitingFor[destID] to dest
    WaitingFor[destID] ← ∅
    send TempDelegate(dest) to self

ProbeFail(destID, seq)
  if (seq ≥ seq[destID])
    /* The message belongs to currently
       stored search requests to dest. */
    WaitingFor[destID] ← ∅

▶ Lemma 2. For every node v it holds: for all x ∈ Left, id(x) < id(v), and for all y ∈ Right, id(v) < id(y).

In this section, we prove the following theorem: ▶
Theorem 3.
Build-List+ is a self-stabilizing solution to the linearization problem.
We prove the theorem in three steps: First, we show that starting from any initial state in which NG is weakly connected, NG will always be weakly connected. Second, we show that starting from any initial state, there will be a state in which ENG will be a supergraph of the line graph and that the explicit edges corresponding to the line will never be removed. Third, we prove that all superfluous explicit edges will eventually vanish.
The first step is represented by the following lemma: ▶
Lemma 4.
If a computation of Build-List+ starts from a state where NG is weakly connected, then in every state, NG remains weakly connected.
Proof.
First, note that in every action, whenever a message with a reference to a node v is received by a node u, then either v is added to the set Left(u) or Right(u), or a new message is created with v as a parameter and sent to a node w ∈ Left(u) ∪ {u} ∪ Right(u). Thus, the implicit edge (u, v) is replaced by a path (u, w, v).
Furthermore, the only action that removes a reference to v from one of the sets Left(u) or Right(u) is the Linearize(v) action. However, in Linearize(v), if v is removed from Left(u) or Right(u), v is also introduced to a node w in Left(u) or Right(u). Thus, the edge (u, v) is replaced by a path (u, w, v) in this case, too. ◀

For the second step of the proof of the theorem, we introduce the notation nextLeft(u) := argmax{id(v) | v ∈ Left(u)} and nextRight(u) := argmin{id(v) | v ∈ Right(u)}. Furthermore, let length(u, v) for two nodes u and v denote the hop distance in the (ideal) line topology between u and v. We define rv(v) for a node v as length(v, nextRight(v)) if Right(v) ≠ ∅, or as n if Right(v) = ∅; we define lv(v) analogously for nextLeft(v). With this, we define a potential function Φ := Σ_{i=1}^{n−1} rv(v_i) + Σ_{i=2}^{n} lv(v_i), where v_1 < v_2 < ... < v_n are all nodes ordered by their id increasingly. Notice that Φ is bounded from above by 2n(n − 1) and from below by 2(n − 1). Moreover, nextLeft(v) (nextRight(v)) can only change if v puts a node closer to v than nextLeft(v) (nextRight(v)) into Left (Right). Thus, Φ never increases. We define the closest neighbor graph as the graph G_NB = (V, E_NB), where V is the set of all nodes and (x, y) ∈ E_NB iff y = nextRight(x) ∨ y = nextLeft(x). Furthermore, we say an edge is temporary if it is an implicit edge due to a TempDelegate() message. All other types of implicit edges are called non-temporary. One can show the following: ▶
Lemma 5.
Assume there is a system state such that Φ does not decrease in any further step of the computation. Then G_NB is bidirected and strongly connected.

We prove this lemma step by step, starting with the following lemma: ▶
Lemma 6.
Assume a system state such that Φ does not decrease in any further step of the computation. Then G_NB is bidirected.

Proof.
Assume for contradiction that there exists an edge (x, y) ∈ E_NB such that (y, x) ∉ E_NB, and w.l.o.g. assume x < y. This implies nextRight(x) = y and x ≠ nextLeft(y). Since Φ does not change anymore, y will remain nextRight(x), and eventually, by the fair action execution assumption, Timeout will be executed in x and x will send an Introduce(x, ⊥) to y, which, by the fair message receipt assumption, will eventually be delivered to y. This implicit edge will turn into a temporary edge (y, x). Note that if Left(y) = ∅ or nextLeft(y) < x, then, according to the protocol and because x < y, nextLeft(y) will be replaced by x, causing Φ to decrease, which contradicts the initial assumption. Therefore, Left(y) ≠ ∅ and x < nextLeft(y) < y must hold. According to the protocol, (y, x) will be delegated (first to nextLeft(y), then possibly further) until it reaches a node z with Left(z) = ∅ or nextLeft(z) < x < z. Here, similar arguments as above yield a contradiction. Thus, G_NB must be bidirected. ◀

The definition of a closest neighbor graph and Lemma 2 imply the following: ▶
Corollary 7. If G_NB is bidirected and disconnected, every connected component forms a line.

To show that G_NB is also strongly connected, we need two additional lemmata. We start with the following: ▶ Lemma 8.
Assume that in a state of the computation of Build-List+, G_NB is bidirected and disconnected. If there is a non-temporary edge (w, v) with w ∈ C, v ∉ C for a connected component C, then eventually either there will be an explicit or a temporary edge (x, y) with x ∈ C and y ∉ C, or Φ will decrease.

Proof.
W.l.o.g., assume w < v. First of all, note that according to the protocol, if the graph G_NB changes, Φ must decrease. Since in that case we are done, in the following we assume that G_NB will never change. Furthermore, by Corollary 7, the connected components of G_NB form lines. We now make a case distinction over all possible types of (w, v):

(w, v) is an implicit edge from a ForwardProbe() message m in which v = source or v ∈ Next, and id(w) = destID. Then, once the message is received, (w, v) will be turned into a temporary edge and the claim follows.

(w, v) is an implicit edge from a ForwardProbe() message m in which v = source and destID > id(w). Consider the state in which this message is received and the corresponding action is executed. Then Next is updated according to the protocol. If Next is empty after this operation, a temporary edge (w, v) is established and the claim holds. Otherwise, let u := argmin{id(u') | u' ∈ Next} after the update. Note that if u > w, we have two sub-cases: either nextRight(w) > u or nextRight(w) ≤ u. In the former case, u will be added to Right(w), causing Φ to decrease, and the claim holds. In the latter case, due to the way Next was updated, nextRight(w) = u must hold. Applying the previous arguments recursively yields that the message will, at some point in time, arrive at a node x ∈ C where either destID = id(x) or Next = ∅ after the update. In this case, a temporary edge (x, v) will be established.
Now, consider the case u < w. Again, we have two sub-cases: either u ∉ C or u ∈ C. In the former case, since the protocol establishes the temporary edge (w, u), the claim follows. In the latter case, the message will be forwarded to u ∈ C. According to the protocol, for u' := argmin{id(u'') | u'' ∈ Next} after the update of Next, it holds that u' > u. Thus, this case reduces to the first sub-case above.

(w, v) is an implicit edge from a ForwardProbe() message m in which v = source and destID < id(w). This case is analogous to the previous one.

(w, v) is an implicit edge from a ForwardProbe() message m in which v ∈ Next and destID > id(w). Note that in this case, if a ForwardProbe() message is delegated from a node x to a node y < x, then a temporary edge (x, y) is also established. Then either y ∉ C, directly proving the claim, or y ∈ C. Observe that each ForwardProbe() message can only be delegated from a node x to a node y < x once. Thus, starting either from the first or from the second step, whenever a ForwardProbe() message is delegated from a node x to a node y, then y > x. Furthermore, note that the protocol assures y ∈ Right(x), i.e., y ∈ C as well. The only case in which a ForwardProbe() message is no longer delegated is when Next is empty (in which case there is nothing left to prove), or when destID = id(x) for a node x. In the latter case, for each node remaining in Next, a temporary edge is created.

(w, v) is an implicit edge from a ForwardProbe() message m in which v ∈ Next and destID < id(w). This case is analogous to the previous one.

(w, v) is an implicit edge from a ProbeSuccess() message (in which v is dest), and a temporary edge (w, v) will be established.

(w, v) is an implicit edge from an Introduce() message. Note that according to the protocol, all edges in an Introduce() message are added either as explicit edges or as temporary edges.

(w, v) is an implicit edge from a Linearize() message, and (w, v) will be turned into a temporary edge. ◀

▶
Lemma 9.
Assume that in a state of the computation of Build-List+, G_NB is bidirected and disconnected. If there is an explicit or a temporary edge (w, v) with w ∈ C and v ∉ C for a connected component C, then eventually there will be an explicit or temporary edge (x, y) with x ∈ C, y ∉ C, and length(x, y) < length(w, v), or Φ will decrease.

Proof.
W.l.o.g., assume w < v. First, assume (w, v) is an explicit edge. If v = nextRight(w), we have a contradiction to the assumption w ∈ C and v ∉ C. Thus, w < nextRight(w) < v must hold. In this case, in Timeout, a new edge (x, v) with w < x < v will be introduced and the claim will hold. Second, assume that (w, v) is an implicit edge from a TempDelegate() message. Then either v < nextRight(w), and (w, v) turns into an explicit edge and v becomes nextRight(w), causing Φ to decrease, or a TempDelegate(v) message is sent to nextRight(w), resulting in a shorter edge (nextRight(w), v). This completes the proof of the second claim. ◀

We are now ready to prove
Lemma 5 : Proof.
Assume there is an initial state in which Φ does not decrease anymore. Furthermore, assume that the closest neighbor graph G_NB is disconnected. First, Lemma 6 guarantees that G_NB is bidirected. Furthermore, by Lemma 4, there must be at least one (implicit or explicit) edge (w, v) between a connected component C and another connected component. Together with Lemma 8, this implies that at some point there must be a temporary or explicit edge (x, y) with x ∈ C and y ∉ C. However, then Lemma 9 can be applied. Since there can only be a finite number of successively shorter edges, at some state Φ must decrease, yielding a contradiction. Thus, G_NB must be weakly connected. Note that Lemma 6 implies that G_NB is also strongly connected, yielding the claim of Lemma 5. ◀

Note that since Φ can never increase and since Φ is bounded from below, Φ can only decrease a finite number of times. After that, the conditions of Lemma 5 are fulfilled. This lemma and Corollary 7 imply the following corollary: ▶
Corollary 10.
For any computation of Build-List+, there is a state in which the graph formed by the explicit edges is a supergraph of the line topology.
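The potential argument behind this corollary can be made concrete on a small toy model. The following sketch is purely illustrative (nodes are plain integer ids, a state maps each id to its explicit (Left, Right) sets, and the sanity check of Lemma 2 is assumed to hold; none of these names come from the protocol itself); it computes Φ as defined in the proof above:

```python
def phi(state):
    """Potential Φ from the convergence proof, on a toy state.

    `state` maps each node id to a pair (Left, Right) of sets of ids.
    rv(v) is the line distance from v to nextRight(v), or n if Right(v)
    is empty; lv(v) is symmetric. Φ sums rv over all nodes except the
    largest and lv over all nodes except the smallest.
    Assumes Lemma 2 holds: Right(v) only holds larger ids, Left(v) smaller.
    """
    ids = sorted(state)
    n = len(ids)
    pos = {v: i for i, v in enumerate(ids)}  # position on the ideal line

    def rv(v):
        right = state[v][1]
        return pos[min(right)] - pos[v] if right else n

    def lv(v):
        left = state[v][0]
        return pos[v] - pos[max(left)] if left else n

    return sum(rv(v) for v in ids[:-1]) + sum(lv(v) for v in ids[1:])

# On the perfect line every counted term is 1, so Φ attains its lower
# bound 2(n - 1); with no neighbors at all every term is n, which gives
# the upper bound 2n(n - 1).
line = {1: (set(), {2}), 2: ({1}, {3}), 3: ({2}, {4}), 4: ({3}, set())}
empty = {v: (set(), set()) for v in range(1, 5)}
print(phi(line), phi(empty))  # 6 24
```

For n = 4 this prints the two extreme values 6 = 2(n − 1) and 24 = 2n(n − 1), matching the bounds stated before Lemma 5.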
For the third step of the proof of the theorem, we have the following lemma: ▶
Lemma 11.
If a computation of Build-List+ contains a state in which ENG is a supergraph of the line topology, then there will be a suffix in which ENG is the line topology and no new explicit edges will ever be created again.
Proof.
For the proof, we introduce the following notation: We say an implicit edge (u, v) is right-relevant if u < v and the implicit edge (u, v) is due to an Introduce(v, w) message in u.Ch with w ≠ ⊥. Accordingly, we say an edge (u, v) is left-relevant if v < u and the implicit edge (u, v) is due to an Introduce(v, w) message in u.Ch with w ≠ ⊥. Additionally, we call an explicit edge (u, v) superfluous if v ≠ nextRight(u) ∧ v ≠ nextLeft(u).
Consider the state in which the graph formed by the explicit edges is a supergraph of the line topology. First of all, notice that according to the protocol, an explicit edge that belongs to the line topology will never be removed (because this would require a node u to get acquainted with a node v that is closer than nextLeft(u) or nextRight(u), which is not possible). In addition, notice that according to the protocol, in every state, (right-/left-)relevant edges are the only implicit edges that can still be turned into an explicit edge. Notice that a right-relevant edge (u, v) can only be created by a node w < u with a superfluous explicit edge to v. Thus, for every node u it holds: if there is no node w < u with a relevant or superfluous edge (w, u), then there will never be a relevant or superfluous edge (x, u) with x < u again.
Consider the leftmost node u that either has at least one right-relevant edge or at least one superfluous right neighbor. Note that once all right-relevant edges have been received by u, no node x ≤ u will ever add a superfluous right neighbor again. Furthermore, notice that right-relevant edges are turned into explicit edges upon receipt. Now, for every superfluous right neighbor v of u, u will send an Introduce(v, u) message to some node w ∈ Right(u). Each of these will eventually be received and, according to the protocol, be answered with a Linearize(v) message to u. This will cause u to delegate v to a node x > u. After the last superfluous edge has been delegated, no node x ≤ u will ever have a superfluous right neighbor again.
Continuing this approach, we can show that all superfluous right neighbors will eventually vanish. Using analogous arguments, we can also show that all superfluous left neighbors will eventually vanish. Thus, the lemma follows. ◀

Note that Corollary 10 and Lemma 11 imply that
Build-List+ converges to the list. Moreover, Lemma 11 yields the closure property. This finishes the proof of Theorem 3.
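To build intuition for the convergence statement of Theorem 3, here is a deliberately simplified, synchronous toy simulation of the linearization idea: in each round, every node bridges consecutive ids among its known neighbors and keeps only its closest neighbor on each side. This is a caricature for intuition only, not the asynchronous message-passing protocol Build-List+ itself; all names are illustrative:

```python
import random

def step(state):
    """One synchronous round: every node introduces consecutive known
    ids to each other (so every dropped edge stays bridged by a path)
    and keeps only its closest neighbor per side."""
    intro = []
    for v, (left, right) in state.items():
        known = sorted(left | right | {v})
        for a, b in zip(known, known[1:]):
            intro.append((a, b))
            intro.append((b, a))
        i = known.index(v)
        state[v] = ({known[i - 1]} if i > 0 else set(),
                    {known[i + 1]} if i < len(known) - 1 else set())
    for a, b in intro:  # deliver all introductions
        left, right = state[a]
        (right if b > a else left).add(b)

def is_line(state):
    """Neighborhoods are exactly those of the sorted line."""
    ids = sorted(state)
    return all(
        state[v] == ({ids[i - 1]} if i > 0 else set(),
                     {ids[i + 1]} if i < len(ids) - 1 else set())
        for i, v in enumerate(ids))

# Start from a random weakly connected state over ids 0..7.
random.seed(1)
order = list(range(8))
random.shuffle(order)
state = {v: (set(), set()) for v in order}
for a, b in zip(order, order[1:]):  # a random spanning path
    (state[a][1] if b > a else state[a][0]).add(b)

rounds = 0
while not is_line(state) and rounds < 100:
    step(state)
    rounds += 1
print("line reached:", is_line(state), "after", rounds, "rounds")
```

Starting from any weakly connected state, this toy dynamic settles on the sorted line, mirroring the convergence part of the theorem; the closure part corresponds to the loop body becoming a no-op once the line is reached.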
In this subsection, we prove the following theorem: ▶
Theorem 12.
Build-List+ admissible-message satisfies non-trivial monotonic searchability according to Search+.

We start with some preliminaries. First, we define R(v) as the set of all nodes x with id(v) < id(x) for which there is a directed path from v to x consisting solely of explicit edges (y, z) with id(y) < id(z). Furthermore, we define R(v, ID) := {x ∈ R(v) | id(x) ≤ ID}. In addition, we define L(v) as the set of all nodes x with id(x) < id(v) for which there is a directed path from v to x consisting solely of explicit edges (y, z) with id(z) < id(y). For a set U, R(U) := U ∪ ⋃_{u ∈ U} R(u) and R(U, ID) := {x ∈ R(U) | id(x) ≤ ID}. Accordingly, L(U) := U ∪ ⋃_{u ∈ U} L(u) and L(U, ID) := {x ∈ L(U) | id(x) ≥ ID}.
Moreover, we define a state as admissible if the following message invariants hold: If there is an
Introduce(v, w) message with w ≠ ⊥ in u.Ch, then v ≠ w and u ∈ R(w) (or u ∈ L(w)).
If there is a Linearize(v) message in w.Ch, then there is a node u ≠ v with u ∈ Right(w) and v ∈ R(u) if w < v (or u ∈ Left(w) and v ∈ L(u) if v < w).
If there is a ForwardProbe(source, destID, Next, seq) message in u.Ch, then
a. id(source) < destID, ∀x ∈ Next: id(x) ≥ id(u), and u = argmin{id(u') | u' ∈ Next} (alternatively, destID < id(source), ∀x ∈ Next: id(x) ≤ id(u), and u = argmax{id(u') | u' ∈ Next}),
b. id(source) < destID and R(Next) ⊆ R(source) (or destID < id(source) and L(Next) ⊆ L(source)),
c. if v exists such that id(v) = destID, id(source) < destID, and v ∉ R(Next, destID) (or destID < id(source) and v ∉ L(Next, destID)), then for every admissible state with source.seq[destID] < seq, v ∉ R(source, destID) (v ∉ L(source, destID)).
If there is a ProbeSuccess(destID, seq, dest) message in u.Ch, then id(dest) = destID and dest ∈ R(u) if destID > id(u) (or dest ∈ L(u) if destID < id(u)).
If there is a ProbeFail(destID, seq) message in u.Ch, then either there is no node with id destID, or for every admissible state with u.seq[destID] < seq, v ∉ R(u) (and v ∉ L(u)), where v is such that id(v) = destID.
If there is a Search(v, destID) message in u.Ch, then id(u) = destID and u ∈ R(v) if id(v) < destID (or u ∈ L(v) if destID < id(v)).

▶ Lemma 13.
If in a computation of Build-List+ there is an admissible state, then all subsequent states are admissible.

In order to prove Lemma 13, we need the following lemmata: ▶
Lemma 14.
If in a computation of Build-List+ the first two invariants hold, then in all subsequent states the first two invariants will hold.
Proof.
Assume there is a state s in which the first two invariants hold and such that in the (direct) subsequent state s' one of the first two invariants does not hold. Obviously, this can only be due to one of the following three reasons: First, a new Introduce(v, w) message with w ≠ ⊥ was sent to a node u with u ∉ R(w) (and u ∉ L(w)) in s. Second, a new Linearize(v) message was sent to a node w in s, but there is no node u ≠ v with u ∈ Right(w) and v ∈ R(u) (or u ∈ Left(w) and v ∈ L(u)). Third, a node y was removed from a set Right(w) (or Left(w)). We show that all three cases cannot happen.
For the first case, notice that according to the protocol, the only occasion on which an Introduce(v, w) message with w ≠ ⊥ is sent is in the Timeout action of a node w. Here, it is only sent to nodes in Right(w) (or Left(w)) and only with a first parameter v ≠ w.
For the second case, notice that according to the protocol, the only occasion on which a Linearize(v) message is sent to a node w is in an Introduce(v, w) action at a node u. This must have been triggered by an Introduce(v, w) message with w ≠ ⊥. Thus, before the action was executed, by the first invariant, u ∈ R(w) (or u ∈ L(w)) and v ≠ w were both fulfilled. This implies that there must be a node u' ∈ Right(w), i.e., w < u', such that u ∈ R(u') or u = u' (or a node u' ∈ Left(w), i.e., u' < w, such that u ∈ L(u') or u = u'). During the execution of the action, v was added to Right(u) (or Left(u)), which implies v ∈ R(u') (or v ∈ L(u')).
For the third case, note that a node y is only removed from Right(w) (or Left(w)) if the Linearize(y) action has been executed in w between s and s'. However, by the second invariant, there must be a node u ≠ y with u ∈ Right(w) and y ∈ R(u) (or u ∈ Left(w) and y ∈ L(u)). Thus, after the removal, y ∈ R(w) still holds.
Therefore, in all three cases the first two invariants cannot be violated, and they have to hold in s', too. ◀

▶ Lemma 15.
If there is a state in which the first two invariants hold and x ∈ R(v) (x ∈ L(v)), then in every subsequent step, x ∈ R(v) (x ∈ L(v)).

Proof.
We only consider the case x ∈ R(v), as x ∈ L(v) is completely analogous. Obviously, adding additional edges does not remove elements from R(v). The only action that delegates away an explicit edge (y, z) stored in Right(y) for some nodes y, z (and hence could remove nodes from R(v)) is the Linearize() action if y < z. Therefore, consider an arbitrary Linearize(z) action executed by y. Note that since we assumed that the first two invariants hold, right before Linearize(z) is executed, there has to be a node u ≠ z with u ∈ Right(y) and z ∈ R(u), by the second invariant. Consequently, after z is removed from Right(y), z ∈ R(y) still holds. ◀

▶ Lemma 16.
If in a computation of Build-List+ the first three invariants hold, then in all subsequent states the first three invariants will hold.
Proof.
Assume there is a state s in which the first three invariants hold and such that in the (direct) subsequent state s' one of the first three invariants does not hold. Note that by Lemma 14, the first two invariants cannot be violated in s'. Furthermore, by Lemma 15 and the fact that u.seq[id] is monotonically increasing (according to the protocol), one can easily show that the only reason why Invariant 3 can be invalidated is that a new ForwardProbe() message is sent. In the following, we will only consider the case id(source) < destID, as the other case is completely analogous.
Assume a node x sends a ForwardProbe(source, destID, Next, seq) message to a node y. This may happen in two cases: either in the Timeout action of the node x, or when x receives another ForwardProbe(source, destID, Next', seq) message and executes the corresponding action. In the first case, Next = {x}, and it is easy to see that claims a) and b) of the third invariant are fulfilled. In the second case, both ∀z ∈ Next: id(z) ≥ id(y) and y = argmin{id(u) | u ∈ Next} hold, since (by the third invariant) (i) ∀z ∈ Next': id(z) ≥ id(x), and ∀z ∈ Right(x): id(z) ≥ id(x) (by Lemma 2), (ii) only nodes from Right(x) are added to Next, (iii) x was argmin{id(u) | u ∈ Next'} and is not added to Next, and (iv) y is selected as the minimum node from Next. By the third invariant, x ∈ R(source), which implies Right(x) ⊆ R(source). Now, since R(Next') ⊆ R(source) by the third invariant and Next = Next' \ {x} ∪ Right(x), R(Next) ⊆ R(source). Thus, Invariant 3b) holds afterwards.
For the third claim of the third invariant, we again distinguish between the message being sent in Timeout or in the ForwardProbe(source, destID, Next', seq) action. In the former case, notice that R(Next, destID) = R(source, destID). Assume there has been an admissible state in which source.seq[destID] < seq and v ∈ R(source, destID) hold. Since source.seq[destID] is monotonically increasing, this must have been a previous state. By Lemma 15, v ∈ R(source, destID) = R(Next, destID) must still hold, yielding a contradiction. In the latter case, assume v ∈ R(Next', destID) (otherwise, Invariant 3c) trivially holds). Notice that due to Invariant 3b), x ∈ R(source). Since the only node that is in R(Next', destID) but not in R(Next, destID) is x, v ∈ R(Next, destID) follows.
Thus, the first three invariants still hold in s'. ◀

▶ Lemma 17.
If in a computation of Build-List+ the first five invariants hold, then in all subsequent states the first five invariants will hold.
Proof.
Assume there is a state s in which the first five invariants hold and such that in the (direct) subsequent state s' one of the first five invariants does not hold. Note that by Lemma 16, none of the first three invariants can be violated in s'. Furthermore, by Lemma 15 and the fact that, according to the protocol, u.seq[id] is monotonically increasing, one can check that the only reason why Invariant 4 or 5 can be invalidated is that a new ProbeSuccess() or ProbeFail() message is sent. In the following, we will only consider the case id(u) < destID, as the other cases are completely analogous.
First, we consider ProbeSuccess() messages. Hence, assume that a node x sends a ProbeSuccess(destID, seq, dest) message to a node u. According to the protocol, this may only happen in a ForwardProbe() action, when a ForwardProbe(source, destID, Next, seq) message has arrived at x with id(x) = destID and u = source. By b) of the third invariant, dest ∈ R(u).
For the ProbeFail() messages, assume a node x sends a ProbeFail(destID, seq) message to a node u. According to the protocol, this may only happen in a ForwardProbe() action, when a ForwardProbe(source, destID, Next, seq) message has arrived at x with id(x) ≠ destID, u = source, Next = {x}, and there is no y in Right(x) with id(y) ≤ destID. If no node with id destID exists, we are done. Otherwise, we have that v ∉ R(Next, destID). By c) of the third invariant, this implies the claim.
Therefore, the first five invariants have to hold in s', too. ◀

Using these lemmata, we can prove
Lemma 13 : Proof.
Assume there is an admissible state s such that the (direct) subsequent state s' is not admissible, and let s' be the first such state. Note that by Lemma 17, none of the first five invariants can be violated in s'. Furthermore, by Lemma 15, one can check that the only reason why Invariant 6 can be invalidated is that a new Search() message is sent. In the following, we will only consider the case id(v) < destID, as the other case is completely analogous.
Assume a node x sends a Search(v, destID) message to a node u. According to the protocol, x = v, and v must have received a ProbeSuccess(destID, seq, u) message, for which, by Invariant 4, id(u) = destID and u ∈ R(v) must hold, i.e., the sixth invariant holds.
Therefore, all invariants have to hold in s', too. ◀

▶ Lemma 18.
In every computation of
Build-List+ there is an admissible state.
Proof.
According to Theorem 3, there is a state s in which and in every subsequent state,every node x has at most one node in Right ( x ) and at most one nide in Lef t ( x ). Notethat according to the protocol, any Introduce( v, w ) message with v = w is only sent froma node w with more than one in Right ( w ) or Lef t ( x ). Thus, by the fair message receiptassumption, there will be a state s after s , in which all such messages have been received.Further note that any Linearize( v ) message is only sent from a node u if u received an Introduce( v, w ) message, which cannot be the case in s . Thus, by the fair message receiptassumption, there will be a state s after s , in which all Linearize() message have beenreceived. This implies that the first two invariants hold in s . By Lemma 14, they will do soin every subsequent state.Next we show that starting from s , every ForwardProbe( source, destID, N ext, seq ) violating the third invariant will have vanished at some point in time. In the following we . Scheideler and A. Setzer and T. Strothmann 19 only consider such messages with id ( source ) < destID (the other case is analogous). First,notice that any ForwardProbe() message initiated in a
Timeout action by a node x cannot violate the third invariant. This is obvious for a) and b). For c), notice that if v with id ( v ) = destID exists and v / ∈ R ( N ext, w ) and there is an admissible state with x.seq [ destID ] < seq and v ∈ R ( x ), then according to the protocol this state must havebeen an earlier state and Lemma 15 implies that v ∈ R ( x ) in the current state, yielding acontradiction.Second, note that any existing ForwardProbe() message m can cause at most one other ForwardProbe() message m to be created when it is received by a node x . If this m doesnot violate the third invariant then since the first two invariants hold, m will also not violatethe third invariant (for reasons similar to those in the proof of Lemma 16). Thus, we willshow that every ForwardProbe() message that violates the third invariant can only causea finite number of
ForwardProbe() messages that violate the third invariant (which willeventually be received and thus disappear). First of all, note that every
ForwardProbe() message m violating Invariant 3a) cannot cause a ForwardProbe() message m violatingInvariant 3a) according to the protocol. Thus, after all initial ForwardProbe() messageshave been received, Invariant 3a) holds for every
ForwardProbe() message. Now, observethat any such
ForwardProbe() message which is received by a node x can only initiate anew ForwardProbe() message to a node y with id ( y ) > id ( x ), according to the protocol.Since there is only a finite number of nodes, this implies that all ForwardProbe() messageviolating Invariant 3 will eventually disappear.Now, consider the state s in which all of the first three invariants hold. Note that byLemma 16, they hold for all subsequent states, too. Notice that any ProbeSuccess() or ProbeFail() message in u.Ch for a node u cannot cause u to send a ProbeSuccess() or ProbeFail() message. The only only action in which a new
ProbeSuccess() or ProbeFail() message is sent is in the
ForwardProbe() action of a node. Such an actionrequires the receipt of a
ForwardProbe( source, destID, Next, seq ) message m for which, by definition of s, the third invariant holds. Note that according to the protocol, m can only cause a ProbeSuccess( destID, seq, dest ) message m' that is sent to a node x if id(u) = destID (i.e., dest = u) and x = source. By Invariant 3b), u ∈ R(source), implying dest ∈ R(x), i.e., the fourth invariant holds regarding m'. A ProbeFail( destID, seq, dest ) message m' to a node x can only be caused by m if id(u) < destID and Next \ {u} ∪ {w ∈ Right | id(w) ≤ destID} = ∅, implying that v ∉ R(Next, destID) for a node v with id(v) = destID. By Invariant 3c), for every admissible state with source.seq[destID] < seq, v ∉ R(source, destID), i.e., the fifth invariant holds regarding m'. All in all, there is a state s such that all ProbeSuccess() or ProbeFail() messages that were in the incoming channel of any node in s have been received and consequently, for all ProbeSuccess() and
ProbeFail() messages the fourth and fifth invariants will hold. By Lemma 17, they hold for all subsequent states, too. Consider this state s. Notice that a Search( v, destID ) message can only be sent to a node u from a ProbeSuccess( destID, seq, u ) action in v, which requires the receipt of a ProbeSuccess( destID, seq, u ) message for which, by definition of s, the fourth invariant holds. This implies destID = id(u) and u ∈ R(v), yielding Invariant 6 for the new message. Thus, in the state s after all Search() messages that were in the incoming channel of any node in s have been received, all invariants hold, i.e., s is an admissible state. (cid:74) Lemma 13 and Lemma 18 imply the following Corollary 19. (cid:73)
Corollary 19.
In every computation of
Build-List+ , there exists a suffix in which every state is admissible.
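Before turning to monotonic searchability, the probe-forwarding rule that the following lemmas reason about can be sketched in Python. This is our own simplified, rightward-only model of the ForwardProbe step (update Next' = Next \ {x} ∪ {w ∈ Right(x) | id(w) ≤ destID} and forward to the minimum-id candidate); the function and variable names are illustrative, not the paper's pseudocode, and all side effects (TempDelegate, saving references) are omitted:

```python
# Hedged sketch of the rightward ForwardProbe forwarding step. A probe carries
# a Next set; at node x it becomes (Next \ {x}) ∪ {w ∈ Right(x) | id(w) <= destID}
# and is forwarded to the minimum-id node in the result. Nodes are modeled as
# plain integer ids; 'right' maps a node id to its explicit right neighbors.

def forward_probe(right, x, dest_id, next_set):
    """Simulate forwarding until the probe succeeds or fails."""
    while True:
        if x == dest_id:
            return ('success', dest_id)
        next_set = (next_set - {x}) | {w for w in right.get(x, set()) if w <= dest_id}
        if not next_set:
            # No remaining candidate with id <= destID: ProbeFail.
            return ('fail', dest_id)
        x = min(next_set)  # forward to the minimum-id candidate

# Example: a line 1 -> 3 -> 5 -> 8 with an extra shortcut 1 -> 5.
right = {1: {3, 5}, 3: {5}, 5: {8}, 8: set()}
print(forward_probe(right, 1, 8, {1}))   # probe visits 1, 3, 5, 8: ('success', 8)
print(forward_probe(right, 1, 7, {1}))   # no node with id 7: ('fail', 7)
```

Note how each hop strictly increases the current node's id, which mirrors the argument that probes visit nodes with monotonically increasing ids and therefore terminate.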
For the rest of this subsection, we assume that every computation starts in an admissible state, since we want to show that monotonic searchability holds starting from admissible states only. Furthermore, w.l.o.g., we only consider
Search( u, destID ) messages with id(u) < destID. Before we can prove Theorem 12, we need an additional result: (cid:73) Lemma 20.
For every message m = ForwardProbe( v, destID, Next, seq ) ∈ u.Ch with id(u) < destID, it holds that if there is a node w with id(w) = destID and w ∈ R(u), then there will be a state with a message m' = ForwardProbe( v, destID, Next', seq ) ∈ w.Ch. In order to prove Lemma 20, we need the following additional lemma: (cid:73)
Lemma 21.
Assume for a
ForwardProbe( v, destID, Next, seq ) message m ∈ x.Ch, there is a u ∈ R(Next, destID). Then either u = x or there will be a state in which a ForwardProbe( v, destID, Next', seq ) message is in y.Ch for some node y with id(y) > id(x) and u ∈ R(Next', destID). Proof.
Note that when m is received by x, a new message with Next' = Next \ {x} ∪ Right(x) will be sent. According to the third invariant, for all nodes z in Next, id(z) ≥ id(x) holds, and x is the node with minimum id among all nodes in Next. By Lemma 2, the same holds for the nodes z in Right(x). Thus, x is the node with minimum id among all nodes in R(Next, destID), and for the node y to which a new ForwardProbe( v, destID, Next', seq ) message is sent it holds that id(y) > id(x). Furthermore, R(Next, destID) \ {x} ⊆ R(Next', destID). Thus, also u ∈ R(Next', destID) and the claim follows. (cid:74) Using this, we can prove
Lemma 20 : Proof.
Note that when m arrives at u, Next will be changed such that R(Next, w) = R(u, w). If w ∈ R(u), then w ∈ R(Next, w) afterwards. Thus, by applying Lemma 21 recursively, we have that eventually a
ForwardProbe( v, destID, Next', seq ) message is in w.Ch, which will be received according to the fair message receipt assumption. (cid:74) We are now ready to prove Theorem 12:
Proof.
Let m, m' be two Search( u, destID ) messages initiated in u in admissible states, with m being initiated before m', and assume that m is delivered successfully, but m' is not. Let v be such that id(v) = destID. Note that if m' is added to the set WaitingFor[destID] when m is already in the set, then the protocol will handle both messages identically, i.e., if m is successfully delivered to v due to a ProbeSuccess() message, m' is as well. Therefore, m' is added to WaitingFor[destID] when m ∉ WaitingFor[destID], which implies u.seq[destID] has increased since the successful delivery of m (according to the protocol). Since we assume that m' is not delivered successfully, either a ProbeFail( destID, seq ) message eventually arrives at u with seq ≥ u.seq[destID], or no ProbeSuccess( destID, seq, dest ) with seq ≥ u.seq[destID], id(dest) = destID will ever arrive at u. We consider both cases individually. In the first case, by the fifth invariant, v ∉ R(u) has to hold even though m was already successfully delivered. By the sixth invariant, when m was delivered, v ∈ R(u), which is why this is a contradiction to Lemma 15. In the second case, note that ForwardProbe( u, destID, { u }, seq ) messages are regularly initiated by u with seq ≥ u.seq[destID] (since u.seq is monotonically increasing). Again, due to the successful delivery of m, by the sixth invariant and Lemma 15, v ∈ R(u) when m' was initiated, and therefore, by Lemma 20, a ForwardProbe( u, destID, Next', seq ) message with seq ≥ u.seq[destID] will eventually be in v.Ch, which will be answered with a ProbeSuccess( destID, seq, v ) message, causing m' to be sent to v. By the fair message receipt assumption, this contradicts the assumption that m' is not successfully delivered. (cid:74) For the
Build-List+ protocol in Section 3 we implicitly assumed a static node set, i.e., nodes are not allowed to leave or join the network. In this section we want to investigate monotonic searchability in terms of the
Finite Departure Problem ( FDP ) of [7]. Naturally, aleaving node does not execute
InitiateNewSearch() , since it aims at leaving the system.Additionally, a leaving node that is the destination of a
ForwardProbe() message, willdeliberately answer with
ProbeFail() . Consequently, monotonic searchability can only be maintained for pairs of staying nodes. We note that the
FDP deliberately ignores that new nodes can join the network. However, this abstraction is justified in a self-stabilizing setting, since from an algorithmic point of view, for some node u, a new node joining the network is the same as getting a message from a node that it has never been in contact with. In this section, we present the Build-List* and the
Search* protocols. In the followingsections, we further show that
Build-List* solves the
FDP (Section 4.2) and also thelinearization problem (Section 4.3), and extend the proofs of Section 3.3 to show that
Build-List* also satisfies non-trivial monotonic searchability according to
Search* (Section 4.4).
For two staying nodes that interact with each other,
Build-List* is analogous to
Build-List+ . Therefore, we only specify the changes in case a node itself is leaving or receives amessage from a leaving node. A leaving node distinguishes between two different kinds ofneighbors: those that it already had before switching to the leaving mode (which are
Lef t and
Right from
Build-List+ ) and those which it received while being leaving (
T emp L and T emp R ). Searchability is only preserved for nodes in the former two sets.For the ForwardProbe() , Introduce() , Linearize() and
TempDelegate() actions,a leaving node u will always save nodes in T emp L and T emp R in cases where a stayingnode saves them in Lef t and
Right . In its
Timeout action, a leaving node u either in-troduces all its neighbors to each other and executes exit if N IDEC is true or it sendsa
ReverseAndLinearizeREQ() message to all neighbors. With this
ReverseAndLinearizeREQ(dir) message u requests all neighbors to stop holding its reference. As it was shown in [7], leaving nodes should never send their own reference for a successful departure protocol. Therefore, a ReverseAndLinearizeREQ(dir) message only contains a value dir ∈ { left, right } that indicates whether a left or right neighbor should be removed, i.e., u sends a ReverseAndLinearizeREQ(left) message to all its neighbors to the right and a
ReverseAndLinearizeREQ(right) message to all its neighbors to the left. If a node v receives a ReverseAndLinearizeREQ(dir) message, there are two possible scenarios. If v is staying, it sends a ReverseAndLinearizeACK(v,uniqueValue) message to all neighbors in the given direction, which contains its own reference and, for each neighbor, a uniquely created value (in our case a local counter or the id of a node would be sufficient). This value is also saved as satellite data by v at the corresponding node reference in the neighbor set. If v is leaving, it behaves like a staying node if dir is right; otherwise it ignores the request. Thereby, leaving nodes with a higher id are given a higher priority for exiting the system. Once a leaving node u receives a ReverseAndLinearizeACK(v,uniqueValue) message, it responds with a
ReverseAndLinearize( nodeList, uniqueValue ) message that contains the received unique value (for identification purposes) and also all its neighbors that are on the opposite side of the node in the message (i.e., if the received node is to the right of u, u sends all left neighbors and vice-versa). A ReverseAndLinearizeACK(v,uniqueValue) message is ignored by a staying node, meaning that it is transformed into a
TempDelegate( v ) to itself. Finally, the ReverseAndLinearize( nodeList, uniqueValue ) message is received by v, and v checks if it has a neighbor with the given unique value. If this is the case, v either finishes the reversal process by deleting the reference to u and saving the newly received neighbors (if v is staying or getting the ReverseAndLinearize( nodeList, uniqueValue ) message from a right neighbor), or v ignores the message by simply saving all nodes in Temp_L (if v is leaving and getting the ReverseAndLinearize( nodeList, uniqueValue ) message from a left neighbor). In case the unique value does not match, the ReverseAndLinearize( nodeList, uniqueValue ) message is not a response to a former ReverseAndLinearizeACK(v,uniqueValue) message, and all received nodes are processed by
TempDelegate() messages from v to itself. The Search* protocol is very similar to the
Search+ protocol. As already men-tioned, leaving nodes will neither execute
InitiateNewSearch() , nor will they send out a
ProbeSuccess() message. In fact the only action that is different in multiple places is the
ForwardProbe() action, since we have to make sure that references are not saved in
Lef t and
Right but in
T emp L and T emp R .Similar to Build-List+ , Build-List* performs a sanity check for
T emp L , T emp R , Lef t and
Right before each action. The same is done for the nodeList received in a
ReverseAndLinearize() message. However, in the last case a failing sanity check (i.e.,the nodes in nodeList are from two different sides of the current node) directly implies thatthe message is corrupt and it is safe to process the nodes with
TempDelegate() . Thepseudocode for
Build-List* and
Search* is presented in Algorithms 3 and 5.
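The REQ/ACK/ReverseAndLinearize handshake described above can be sketched in Python. This is our own minimal model (class and field names are ours), reduced to one leaving node u between a left neighbor w and a staying right neighbor v; the real protocol is asynchronous, message-based, and handles both directions symmetrically:

```python
# Hedged sketch of the ReverseAndLinearizeREQ / ReverseAndLinearizeACK /
# ReverseAndLinearize handshake, for a leaving node u and its right neighbor v.
import itertools

_token_gen = itertools.count(1)   # stands in for generateUniqueValue()

class Node:
    def __init__(self, ident, mode='staying'):
        self.id, self.mode = ident, mode
        self.left = set()             # left-neighbor references
        self.unique_values = {}       # neighbor -> handshake token

    def on_req(self, direction):
        # REQ(left): answer every left neighbor with an ACK and a fresh token.
        if direction == 'left':
            for neighbor in list(self.left):
                token = next(_token_gen)
                self.unique_values[neighbor] = token
                neighbor.on_ack(self, token)

    def on_ack(self, sender, token):
        # A leaving node replies with its opposite-side neighbors and the token.
        if self.mode == 'leaving':
            sender.on_reverse(self, set(self.left), token)

    def on_reverse(self, sender, node_list, token):
        # Accept only if the token matches a recorded handshake; then drop the
        # leaving neighbor and adopt the delivered references (edge reversal).
        if self.unique_values.get(sender) == token:
            self.left.discard(sender)
            self.left |= node_list

# Topology: w <- u <- v, with u leaving.
w = Node(1)
u = Node(2, mode='leaving'); u.left = {w}
v = Node(3);                 v.left = {u}

v.on_req('left')              # u's Timeout would trigger this REQ at v
assert v.left == {w}          # v now references w instead of the leaving u
```

The unique value plays the same role as in the protocol: a ReverseAndLinearize message is only accepted if it matches a previously recorded ACK, so stale or corrupted messages cannot delete a live reference.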
Listing 3
Build-List* protocol
Timeout
  if (self.mode = staying)
    // See Algorithm .
  else if (NIDEC)
    for all v ∈ Left ∪ Right ∪ Temp_L ∪ Temp_R
      for all w ∈ Left ∪ Right ∪ Temp_L ∪ Temp_R
        send Introduce(w, ⊥) to v
        send Introduce(v, ⊥) to w
    exit
  else
    for all v ∈ Left ∪ Temp_L
      send ReverseAndLinearizeREQ(right) to v
    for all w ∈ Right ∪ Temp_R
      send ReverseAndLinearizeREQ(left) to w

Introduce(v, w)
  if (id(v) < id(self))
    if (self.mode = staying)
      // See Algorithm .
    else
      if (v ∉ Left)
        Temp_L ← Temp_L ∪ {v}
      if (w ≠ ⊥ ∧ w ∉ Left)
        Temp_L ← Temp_L ∪ {w}
  else if (id(v) > id(self))
    // Analogous to the previous case.

Linearize(v)
  if (id(v) < id(self))
    if (self.mode = staying)
      // See Algorithm .
    else
      Temp_L ← Temp_L ∪ {v}
  else if (id(v) > id(self))
    // Analogous to the previous case.

TempDelegate(u)
  if (id(u) < id(self))
    if (Left = ∅)
      if (self.mode = staying)
        Left ← Left ∪ {u}
      else
        Temp_L ← Temp_L ∪ {u}
    else
      x ← argmax{id(x) | x ∈ Left}
      if (id(x) < id(u))
        if (self.mode = staying)
          Left ← Left ∪ {u}
        else
          Temp_L ← Temp_L ∪ {u}
      else
        send TempDelegate(u) to x
  else if (id(u) > id(self))
    // Analogous to the previous case.

Listing 4
Build-List* protocol (continued)
ReverseAndLinearizeREQ(dir)
  if (dir = right)
    for all v ∈ Right ∪ Temp_R
      if (uniqueValues[v] = ⊥)  // i.e., v does not exist in uniqueValues
        // Assume that generateUniqueValue() creates a unique value.
        uniqueValues[v] ← self.generateUniqueValue()
      send ReverseAndLinearizeACK(self, uniqueValues[v]) to v
  else if (dir = left ∧ self.mode = staying)
    // Analogous to the previous case.

ReverseAndLinearizeACK(v, uniqueValue)
  if (id(v) < id(self))
    if (self.mode = leaving)
      Temp_L ← Temp_L ∪ {v}
      send ReverseAndLinearize(Right, uniqueValue) to v
    else
      send TempDelegate(v) to self
  else if (id(v) > id(self))
    // Analogous to the previous case.

ReverseAndLinearize(nodeList, uniqueValue)
  if (∃ v ∈ Left ∪ Right ∪ Temp_L ∪ Temp_R with uniqueValues[v] = uniqueValue)
    if (self.mode = staying)
      if (id(v) < id(self))
        Left ← Left ∪ nodeList
        Left ← Left \ {v}
        send Introduce(self, ⊥) to v
      else if (id(v) > id(self))
        // Analogous to the previous case.
    else  // self.mode = leaving
      if (id(v) < id(self))
        Temp_L ← Temp_L ∪ nodeList
      else  // id(v) > id(self)
        if (v ∈ Right)
          Right ← Right ∪ nodeList
          Right ← Right \ {v}
        else
          Temp_R ← Temp_R ∪ nodeList
          Temp_R ← Temp_R \ {v}
        send Introduce(self, ⊥) to v
  else
    for all u ∈ nodeList
      send TempDelegate(u) to self

Listing 5
Search* protocol
InitiateNewSearch(destID)
  if (self.mode = staying)
    // See Algorithm .
  else
    // do nothing.

ForwardProbe(source, destID, Next, seq)
  if (destID = id(self))
    if (self.mode = staying)
      // See Algorithm .
    else
      send ProbeFail(destID, seq) to source
      for all u ∈ Next
        send TempDelegate(u) to self
      send TempDelegate(source) to self
  else if (destID > id(self))
    Next ← Next \ {self} ∪ {w ∈ Right | id(w) ≤ destID}
    if (Next = ∅)
      send ProbeFail(destID, seq) to source
      send TempDelegate(source) to self
    else
      u ← argmin{id(u) | u ∈ Next}
      if (id(u) < id(self))
        send TempDelegate(u) to self
      else
        if (id(u) < id(argmin{id(v) | v ∈ Right}))
          if (self.mode = staying)
            Right ← Right ∪ {u}
          else
            Temp_R ← Temp_R ∪ {u}
        send ForwardProbe(source, destID, Next, seq) to u
  else if (destID < id(self))
    // Analogous to the previous case.

ProbeSuccess(destID, seq, dest)
  if (self.mode = staying)
    // See Algorithm .
  else
    send TempDelegate(dest) to self

ProbeFail(destID, seq)
  if (self.mode = staying)
    // See Algorithm .

In the following sections we will show that (i)
Build-List* is a self-stabilizing solutionto the
FDP , (ii)
Build-List* is a self-stabilizing solution to the linearization problem and(iii)
Build-List* admissible-message satisfies non-trivial monotonic searchability accordingto
Search*.

FDP
This section is dedicated to proving the following theorem. (cid:73)
Theorem 22.
Build-List* is a self-stabilizing solution to the
FDP . First of all, we prove the safety property. Let
PNG be the subgraph of NG whose nodes are all present nodes. (cid:73)
Lemma 23.
If a computation of
Build-List* starts in a state in which
PNG is weakly connected, PNG remains weakly connected in every state of this computation.
Proof.
Note that the result of Lemma 4 still holds for the actions
Timeout , Introduce() , Linearize() and
TempDelegate() in case the executing node is staying. Furthermore,the result directly transfers to
Introduce() , Linearize() and
TempDelegate() if theexecuting node is leaving, since the only change is that references are stored in
Temp_L and Temp_R instead of Left and
Right . The same is true for the actions of
Search* : ForwardProbe() , ProbeSuccess() and
ProbeFail() . Moreover, a leaving node executingthe
Timeout action can only endanger weak connectivity, if it executes exit . However, inthat situation
NIDEC is true for the node and it introduces all neighbors to each other before calling the exit command. Hence, weak connectivity is nevertheless preserved for all present nodes. For the three new actions of
Build-List* we note that the only action that activelydeletes a reference is
ReverseAndLinearize() . However, if that happens, an
Introduce() message containing the own reference is sent to the deleted node. Thus an explicit edge ( a, b )is replaced by an implicit edge ( b, a ) (i.e., the edge is reversed ) and weak connectivity ispreserved. (cid:74)
Second, we prove the
Liveness property: (cid:73)
Lemma 24.
For any computation of
Build-List* there exists a computation suffix inwhich all leaving nodes are gone.
Proof.
Assume for contradiction there is a computation C of Build-List* for which there does not exist a computation suffix in which all leaving nodes are gone. Let CS be the suffix of C in which (i) all nodes that will ever decide to be leaving have done so and (ii) all leaving nodes that will execute exit are gone. Since the node set is finite, such a suffix has to exist. Let s be the first state of CS. Let CS be the suffix in which all Introduce() , Linearize() , TempDelegate() , ReverseAndLinearizeREQ() , ReverseAndLinearizeACK() , ReverseAndLinearize() , ProbeSuccess() , ProbeFail() and
ForwardProbe() messages that were in the incoming channel of any node in state s have been received and all ReverseAndLinearize() messages sent in response to a
ReverseAndLinearizeACK() in s have also been received. Note that for all states in CS it holds that dest is staying, since leaving nodes answer every ForwardProbe() with a
ProbeFail() . Additionally, leaving nodes do notsend
ForwardProbe() messages in CS, so the number of ForwardProbe() messages in CS for which the source is leaving is upper bounded. In fact, for CS it holds that any ForwardProbe() message has been received at least once. Therefore, a node cannot be added twice to the
N ext field of a message since
ForwardProbe() messages are onlyforwarded into one direction according to the protocol, i.e., a
ForwardProbe() will visitonly nodes with increasing id or only with decreasing ids. Therefore, each
ForwardProbe() can only be forwarded finitely often and is thereby answered by
ProbeSuccess() or ProbeFail() eventually. Consequently, there is also a state (and thereby a computation suffix CS), in which all ForwardProbe() messages which have a leaving node as the source are answered by their
ProbeSuccess() or ProbeFail() and also these
ProbeSuccess() or ProbeFail() messages in the incoming channel of a leaving node have been received.
Note that in every state of CS, every message that is in x.Ch has been sent in CS. We call the node that adds a message into the incoming channel the sender of the message. By the definition of CS, the following invariants hold (which is easy to check against the protocol): If a ForwardProbe( source, destID, Next, seq ) message is in x.Ch and id(source)
TempDelegate( y ) message in x.Ch and id ( x ) < id ( y ), then for the sender z , id ( z ) < id ( x ) < id ( y ) or z = x . If there is an
Introduce( y, z ) message in x.Ch with z ≠ ⊥ and id(x) < id(y), then z is the sender and when z sent the Introduce( y, z ) message, y ∈ Right(z) (and vice-versa). If there is an
Introduce( y, ⊥ ) message in x.Ch, then either y is the sender and y is not leaving (since otherwise y would execute exit after sending the message, contradicting the definition of C), or the sender z ≠ y is staying and sent the message as an answer to a Linearize( y ) message. If there is a
Linearize( y ) message in x.Ch and id ( x ) < id ( y ), then in the state in whichthe sender z sent the Linearize( y ) message, it must have done so in response to an Introduce( y, x ) it received. If there is a
ReverseAndLinearizeACK(y,uniqueValue) message in x.Ch , then y is the sender, and id ( y ) < id ( x ) and in the state in which y sent the message, x ∈ Right ( y ) ∪ T emp R ( y ) (or id ( x ) < id ( y ) and y is staying). If there is a
ReverseAndLinearize( nodeList, uniqueValue ) message in x.Ch, then the sender z must be leaving and for every y ∈ nodeList it holds that id(y) > id(z). Additionally, the message is a response to a ReverseAndLinearizeACK(x,uniqueValue) message received by z. In order to prove the desired statement, we first show two additional lemmas before continuing with the proof. (cid:73) Lemma 25.
Consider a state s of CS and let u be a staying node and v be a leaving node with id(u) < id(v). If it holds in s that (i) there is no edge (u', v) ∈ NG with id(u') < id(u), and (ii) for any leaving node v' with id(u) < id(v') < id(v) there will never be an edge (u, v') ∈ NG in a subsequent state, then there is a state s' in CS such that for the computation suffix CS' starting in s' it holds that (u, v) ∉ NG for every state in CS'. Proof.
Since there is no edge (u', v) ∈ NG with id(u') < id(u), no node to the left of u can add a message to u.Ch that contains the reference of v. Additionally, a ReverseAndLinearize() message sent to u by a leaving node v' with id(u) < id(v') < id(v) can only be sent as a response to a ReverseAndLinearizeACK(u,uniqueValue) by v' (see Invariant 7), which cannot happen since there will never be an edge (u, v') ∈ NG. Note that we only consider states in CS, therefore the above-mentioned invariants hold. At first assume that no edge (u, v) exists. If u never gets a reference to v in CS, the lemma holds trivially. Otherwise, u can only get the reference of v in an Introduce( v, ⊥ ) or in a Linearize( v ) message. In the first case, the Introduce( v, ⊥ ) was sent by a node w ≠ v as a response to a former Linearize( v ) message, according to Invariant 4. According to the pseudocode of Linearize(), this can only happen if id(v) > id(u) > id(w) or id(v) < id(u) < id(w). Both cases cannot happen, since id(u) < id(v) and no node to the left of u can add a message to u.Ch. So in this scenario, the lemma holds as well. In the second case, upon receiving the Linearize( v ) message, u will send a TempDelegate( v ) to itself. Consequently, there is a state in CS in which an edge (u, v) exists, which is handled in the following. Now consider the case that an edge (u, v) exists. Note that (u, v) can be a multi-edge and be explicit as well as implicit. In fact, it can be both, and if it is implicit it can be due to multiple messages in u.Ch.
At first we show that all messages in u.Ch that contain a reference to v will be made explicit or vanish completely. If there is an Introduce( v, ⊥ ) message in u.Ch, then u will send a TempDelegate( v ) message to itself upon receipt. There can be no Introduce( v, w ) message for some node w in u.Ch, since (i) if id(w) < id(u), then due to Invariant 3, v ∈ Right(w), which contradicts the choice of u, and (ii) if id(w) > id(u), then according to the pseudocode w can only send an Introduce( u, w ) message to v and not vice-versa. If there is a Linearize( v ) message in u.Ch, then u will either convert it into a TempDelegate( v ) message to itself or delete a previously saved reference to v and send a TempDelegate( v ) to a node with a higher id. If there is a TempDelegate( v ) message in u.Ch, u either saves the reference (thereby deleting the implicit edge) or sends a TempDelegate( v ) to a node with a higher id. If there is a ReverseAndLinearize() message in u.Ch (i.e., v ∈ nodeList), then u either saves the reference or sends a TempDelegate( v ) to itself. There can be no ReverseAndLinearizeACK() message in u.Ch (since u is staying). Consider the case in which (u, v) is explicit. If there is no node x ∈ Right(u) with id(u) (cid:73) Lemma 26. Consider a state s of CS and let u and v be leaving nodes with id(u) < id(v). If it holds in s that (i) there is no edge (u', u) ∈ NG with id(u') < id(u), (ii) for any leaving node v' with id(u) < id(v') < id(v) there will never be an edge (u, v') ∈ NG in a subsequent state, and (iii) there exists a (w, u) ∈ NG with w leaving and id(u) < id(w), then there is a state s' in CS such that for the computation suffix CS' starting in s' it holds that (u, v) ∉ NG for every state in CS'. Proof. Since there is no edge (u', u) ∈ NG with id(u') < id(u), no node to the left of u can add a message to u.Ch.
Additionally, since id(u) < id(v), no node x to the right of u can add a TempDelegate( v ) or Introduce( v, x ) message to u.Ch, according to the protocol. Furthermore, no node to the right of u can send a Linearize( v ) to u, since the message has to be a response to a former Introduce( v, u ) message by u (according to Invariant 5), which u does not send. Moreover, no node to the right of u can send an Introduce( v, ⊥ ), since it has to be sent by a node w ≠ v as a response to a former Linearize( v ) message, according to Invariant 4. This can only happen if id(v) > id(u) > id(w) or id(v) < id(u) < id(w) (i.e., it never happens). Finally, no leaving node to the right can add a message to u.Ch that contains the reference of v, because for any leaving node v' with id(u) < id(v') < id(v) there will never be an edge (u, v') ∈ NG, and Invariant 7 applies. Note that we only consider states in CS, therefore the above-mentioned invariants hold. At first assume that no edge (u, v) exists. Analogous to the same situation in Lemma 25, one can show that the statement of the lemma is true. In case (u, v) exists, (u, v) can be a multi-edge and be explicit as well as implicit.
At first consider all implicit edges (u, v). If there is an Introduce( v, ⊥ ) message or Introduce( v, x ) message in u.Ch for some node x, then u will save the reference of v. In case there is a TempDelegate( v ) message in u.Ch, u either saves the reference (thereby deleting the implicit edge) or sends a TempDelegate( v ) to a node with a higher id. If there is a ReverseAndLinearize() message in u.Ch, it cannot contain the reference of v, since for the leaving sender id(u) < id(sender) < id(v) has to hold (contradicting the choice of (u, v) and the fact that for any leaving node v' with id(u) < id(v') < id(v) there will never be an edge (u, v') ∈ NG). There can be no ReverseAndLinearizeACK(v,uniqueValue) message in u.Ch that contains v (since id(u) < id(v) and Invariant 6). Therefore, eventually (u, v) is only an explicit edge. Due to our choice of u, v, w in the statement, the node w eventually sends a ReverseAndLinearizeREQ(right) to u and u responds with a ReverseAndLinearizeACK(u, uniqueValue) to v. Node v will receive said message, save u in its local memory and send a ReverseAndLinearize( nodeList, uniqueValue ) back to u. Consequently, u deletes its reference to v and saves the nodeList instead. Note that any further ReverseAndLinearize( nodeList, uniqueValue ) messages from v do not create an edge (u, v), since u has no node x in its local memory with uniqueValues[x] = uniqueValue, so it only saves the nodeList itself. Thus, there is a state s' in CS such that for the computation suffix CS' starting in s' it holds that (u, v) ∉ NG for every state in CS'. (cid:74) With these two lemmas in place, we can focus on the main statement. Note that since CS is a computation suffix of C, by our initial assumption there exists at least one present leaving node in CS.
Consider the set L of present leaving nodes x with the property that throughout CS there does not exist a leaving node y with id(y) > id(x) with x ∈ Left(y) or x ∈ Temp_L(y). Furthermore, let u* be the node with minimum id in L. Such a node must always exist since, due to Lemma 2, the present leaving node with highest id is always in L. We will show a contradiction to our initial assumption by proving that the node u* can execute exit eventually. In order to do so, consider the following lemma. (cid:73) Lemma 27. There is a computation suffix CS* of CS such that no edge (u, u*) with id(u) < id(u*) exists in CS*. Proof. We will prove the statement by induction over all leaving nodes v with id(v) ≤ id(u*). For the sake of simplicity we address those nodes by v_1, v_2, ..., v_k = u* with id(v_i) < id(v_{i+1}). For the induction base, consider the leaving node with lowest id, v_1. Let w_1, ..., w_m with id(w_i) < id(w_{i+1}) be all nodes with a lower id than v_1. By definition all w_i are staying. Due to the definition of v_1 and w_1, Lemma 25 is applicable (in fact, part (ii) of the if-statement is irrelevant) and there is a suffix such that (w_1, v_1) will cease to exist forever. Consequently, Lemma 25 is applicable to w_2, and we can continue this approach until we have a suffix such that no edge (u, v_1) with id(u) < id(v_1) exists in that suffix. For the induction step, assume that the statement holds for some leaving node v_i. Similar to the induction base, let w_1, ..., w_ℓ be all nodes with a lower id than v_i and let w_{ℓ+1}, ..., w_m be all nodes with an id bigger than v_i but smaller than v_{i+1} (with id(w_i) < id(w_{i+1})). At first consider all w_i ∈ {w_1, ..., w_ℓ} in increasing order. In case the currently considered node w_i is staying, we can apply Lemma 25 to show that there is a suffix such that all nodes with an id lower than w_i will never have an edge to v_{i+1}.
In case the currently considered w i isleaving we can apply Lemma 26 to get the same outcome. Now consider v i , by the inductionhypothesis, we know that we can also apply Lemma 26. For all w i ∈ { w ‘ +1 , . . . , w m } weknow that they are staying, i.e., Lemma 25 is applicable again. Therefore the induction stepis complete which proves the statement. (cid:74) Aisde from this, we can show that there is also a computation suffix in which there existsno edge ( u, u ∗ ) with id ( u ) > id ( u ∗ ). We can do so by an argument analogous to Lemma 25(only that in this case the staying nodes has a higher id) and due to the choice of u ∗ (i.e.,throughout the computation suffix CS only staying nodes with higher id have an edge to u ∗ ).Consequently, there exists a state in CS ∗ (and thereby also in C ) such that for all nodes u no edge ( u, u ∗ ) exists. Therefore, u ∗ cannot receive any messages anymore and once itschannel is empty, N IDEC evaluates to true (i.e., it executes exit ). This is a contradictionto the choice of C . (cid:74) Here, we show the following theorem. (cid:73) Theorem 28. Build-List* is a self-stabilizing solution to the linearization problem. Proof. Note that by Lemma 24, in every computation of Build-List* there is a suffix inwhich all leaving nodes are gone. Note that starting from this state, Build-List* actsexactly as Build-List+ . By Lemma 23, N G is still weakly connected in this state. Thus,the properties of Theorem 3 are fulfilled, yielding that Build-List* is a solution to thelinearization problem as well. (cid:74) Finally, we prove the following thereom concerning monotonic searchability. (cid:73) Theorem 29. Build-List* admissible-message satisfies non-trivial monotonic searchabilityaccording to Search* . In general, the proof follows the structure of the results from Subsection 3.3. However,since we want to satisfy monotonic searchability even under the presence of leaving nodes, . Scheideler and A. Setzer and T. 
the proof is more involved. First, we define R_s(v) as the set of all staying nodes x with id(v) < id(x) for which there is a directed path from v to x consisting solely of explicit edges (y, z) with id(y) < id(z) that arise from z ∈ Right(y). Furthermore, we define R_s(v, w) := {x ∈ R_s(v) | id(x) ≤ id(w)}. In addition, we define L_s(v) as the set of all staying nodes x with id(x) < id(v) for which there is a directed path from v to x consisting solely of explicit edges (y, z) with id(z) < id(y) that arise from z ∈ Left(y). For a set of nodes U, we define R_s(U) := U ∪ ⋃_{u∈U} R_s(u) and L_s(U) := U ∪ ⋃_{u∈U} L_s(u). Additionally, we define R_s(U, ID) := {x ∈ R_s(U) | id(x) ≤ ID} and L_s(U, ID) := {x ∈ L_s(U) | id(x) ≥ ID}. Last, we let R_s^+(u) := R_s(u) if u is leaving, and R_s^+(u) := R_s(u) ∪ {u} if u is staying (with L_s^+(u) defined analogously).

Moreover, we define the following message invariants:

1. If there is an Introduce(v, w) message with w ≠ ⊥ in u.Ch, then v ≠ w and R_s^+(u) ⊆ R_s(w) (or L_s^+(u) ⊆ L_s(w)).
2. If there is a Linearize(v) message in w.Ch, then there is a node u ≠ v with u ∈ Right(w) and R_s^+(v) ⊆ R_s(u) if w < v (or u ∈ Left(w) and L_s^+(v) ⊆ L_s(u) if v < w).
3. If there is a ReverseAndLinearizeACK(v, uniqueValue) message in u.Ch, then u ≠ v, v.uniqueValues[u] = uniqueValue, and u is the only node with v.uniqueValues[u] = uniqueValue.
4. If there is a ReverseAndLinearize(nodeList, uniqueValue) message in u.Ch, then there is exactly one node v with u.uniqueValues[v] = uniqueValue. Furthermore, v is leaving, and R_s(v) = R_s(nodeList) if u < v (or L_s(v) = L_s(nodeList) if v < u).
5. If there is a ForwardProbe(source, destID, Next, seq) message in u.Ch, then
a. id(source) < destID, ∀x ∈ Next: id(x) ≥ id(u), and u = argmin{id(x) | x ∈ Next} (alternatively, destID < id(source), ∀x ∈ Next: id(x) ≤ id(u), and u = argmax{id(x) | x ∈ Next}),
b. id(source) < destID and R_s(Next) ⊆ R_s(source) (or destID < id(source) and L_s(Next) ⊆ L_s(source)), and
c. if there is a staying node v with id(v) = destID such that id(source) < destID and v ∉ R_s(Next, destID) (or destID < id(source) and v ∉ L_s(Next, destID)), then for every admissible state with source.seq[destID] < seq, v ∉ R_s(source, destID) (or v ∉ L_s(source, destID), respectively).
6. If there is a ProbeSuccess(destID, seq, dest) message in u.Ch, then id(dest) = destID and dest ∈ R_s(u) if destID > id(u) (or dest ∈ L_s(u) if destID < id(u)), or dest is leaving.
7. If there is a ProbeFail(destID, seq) message in u.Ch, then either there is no staying node with id destID, or for every admissible state with u.seq[destID] < seq, v ∉ R_s(u) (and v ∉ L_s(u)), where v is the node with id(v) = destID.
8. If there is a Search(v, destID) message in u.Ch and u is staying, then id(u) = destID and u ∈ R_s(v) if id(v) < destID (or u ∈ L_s(v) if destID < id(v)).

A state is admissible if all of these invariants hold. As in Section 3.3, we can prove:

▶ Lemma 30. If in a computation of Build-List* there is an admissible state, then all subsequent states will be admissible as well.

The general structure of the proof is similar to that of Lemma 13, although the details differ because nodes can become leaving and because of the additional message invariants. First, we show the following:

▶ Lemma 31. If in a computation of Build-List* there is a state in which Invariants 1-4 hold, then in all subsequent states Invariants 1-4 will hold.

Proof.
Assume there is a state s in which Invariants 1-4 hold such that in the (directly) subsequent state s′ one of Invariants 1-4 does not hold. First of all, note that none of the first four invariants can be invalidated because some node becomes leaving. Secondly, note that the first four invariants cannot be falsified due to a new Introduce(v, w) or Linearize(v) message, for very similar reasons as in the proof of Lemma 14 (since in this part Build-List+ and Build-List* are exactly the same). Furthermore, note that according to the protocol, when a node w sends a ReverseAndLinearizeACK(v, uniqueValue) message to a node u, then w = v and it makes sure that uniqueValue is stored in v.uniqueValues[u] (and we assume that uniqueValue is only stored for u). Thus, sending such a message cannot invalidate one of the first four invariants either. Moreover, note that when a node v sends a ReverseAndLinearize(nodeList, uniqueValue) message to a node u with u < v between states s and s′, then v must have received a ReverseAndLinearizeACK(u, uniqueValue) message right before, and v must be leaving. Since Invariant 3 holds in s, this means that u.uniqueValues[v] = uniqueValue and v is the only node such that u.uniqueValues[v] = uniqueValue. In addition, when sending the message, v added all nodes from Right(v) to nodeList. Thus, in state s′, R_s(v) = R_s(nodeList) holds and v is the only node with u.uniqueValues[v] = uniqueValue. If v < u, L_s(v) = L_s(nodeList) holds by analogous arguments. Besides, note that the R_s(v) = R_s(nodeList) part of Invariant 4 for a node v cannot be invalidated by the addition of any node to the set Right(v) (or Left(v)), because v is leaving and a leaving node never adds a member to Right (or Left).
Any other addition of a node to a set Right(x) (or Left(x)) for another node x adds this node to R_s(v) and to R_s(nodeList) at the same time, or not at all.

Thus, the only event that can invalidate one of the first four invariants is the removal of a node y from a set Right(x) or Left(x) of a node x. This may only happen in a Linearize(y) action at a staying node or in a ReverseAndLinearize(nodeList, uniqueValue) action. We consider both actions individually.

First of all, assume a Linearize(y) action has been executed at a staying node w between s and s′ and has thus removed a node y from Right(w) (or Left(w)). This can only happen if there was a Linearize(y) message in w.Ch in s, for which, by the definition of s, Invariant 2 holds. Thus, there is a node u ≠ y with u ∈ Right(w) and R_s^+(y) ⊆ R_s(u) (or u ∈ Left(w) and L_s^+(y) ⊆ L_s(u)), implying that after the removal of (w, y), R_s^+(y) ⊆ R_s(w) (or L_s^+(y) ⊆ L_s(w)) still holds, i.e., there is no node x from whose set R_s(x) a node has been removed, and the first four invariants cannot be invalidated by this change.

Now assume that a ReverseAndLinearize(nodeList, uniqueValue) action has been executed at a node u between states s and s′. In this case, the corresponding message must have been in u.Ch in s. Since the first four invariants hold in s, by the fourth invariant there must be exactly one node v that is leaving with u.uniqueValues[v] = uniqueValue, and R_s(v) = R_s(nodeList) if u < v (or L_s(v) = L_s(nodeList) otherwise). W.l.o.g. assume that u < v (note that in case v < u and u leaving, no node is removed from or added to Left(u) at all, but in this case the invariant still holds, which is what we want to prove anyway). If v ∉ Right(u), no node is removed from or added to Right(u) at all and the claim follows immediately. Thus, assume v ∈ Right(u).
In this case, u removes v from Right(u) and adds nodeList to Right(u). Since R_s(v) = R_s(nodeList) and R_s(v) ⊆ R_s(u), and v ∉ R_s(v) (because v is leaving), no node has been removed from or added to R_s(u) after the action has been performed, implying that all four invariants still hold. ◀

Similar to Lemma 15, one can show the following:

▶ Lemma 32. If there is a state in which the first four invariants hold and R_s^+(x) ⊆ R_s(v) (L_s^+(x) ⊆ L_s(v)), then in every subsequent state, R_s^+(x) ⊆ R_s(v) (L_s^+(x) ⊆ L_s(v)).

Proof. Assume there is a state s such that R_s^+(x) ⊆ R_s(v) holds, but in the (directly) subsequent state s′, R_s^+(x) ⊆ R_s(v) does not hold. We consider all possible reasons why R_s^+(x) ⊆ R_s(v) might not hold in s′. Obviously, neither the addition of a node to R_s(v) nor the removal of a node from R_s^+(x) can violate the claim. Note that if a node z is added to R_s^+(x), this happens because a node y ∈ R_s^+(x) added z to Right(y). However, since y ∈ R_s(v), z is also added to R_s(v) (by the definition of this set). This yields that the only possible reason for the claim to be violated in s′ is that a (staying) node z ∈ R_s^+(x) was removed from R_s(v) but not from R_s^+(x). We consider all possible cases for this.

First, assume z was removed from R_s(v) because z became leaving. Then z was also removed from R_s^+(x).

Secondly, assume that z was removed from R_s(v) due to a Linearize(y) action at a node w ∈ R_s(v) ∪ {v} between s and s′. Then, by the second invariant, there was a node u ≠ y with u ∈ Right(w) and R_s^+(y) ⊆ R_s(u) in s.
Thus, after y is removed from Right(w), R_s^+(y) ⊆ R_s(w) still holds, implying R_s^+(y) ⊆ R_s(v), i.e., neither y nor any other node z ∈ R_s(y) was removed from R_s(v).

Thirdly, assume a staying node z ∈ R_s^+(x) was removed from R_s(v) but not from R_s^+(x) due to a ReverseAndLinearize(nodeList, uniqueValue) action at a node u, removing a node y from Right(u). In this case, according to Invariant 4, y is the unique node with u.uniqueValues[y] = uniqueValue, y is leaving, and R_s(y) = R_s(nodeList). Thus, when y is removed from Right(u) and nodeList is added to Right(u), no node is removed from R_s(u), implying that no node is removed from R_s(v).

Thus, the claim holds in every case. Note that the argument for L_s^+(x) ⊆ L_s(v) is completely analogous. ◀

Using this, we can prove the following lemmata:

▶ Lemma 33. If in a computation of Build-List* there is a state in which Invariants 1-5 hold, then in all subsequent states Invariants 1-5 will hold.

Proof. Assume there is a state s in which Invariants 1-5 hold such that in the (directly) subsequent state s′ one of the first five invariants does not hold. By Lemma 31, this can only be Invariant 5. Note that Invariant 5a) is equal to Invariant 3a) from Section 3.3. Thus, Invariant 5a) cannot be violated, for the reasons mentioned in the proof of Lemma 16. Note that if Invariants 5b) and 5c) hold for a ForwardProbe(source, destID, Next, seq) message when this message is sent, they also do so when the message is delivered, because of Lemma 32. Thus, the only reason why Invariant 5b) or 5c) might not hold in s′ is that a new ForwardProbe(source, destID, Next, seq) message has been sent. There may be two reasons for this: either a node u executed Timeout, or a node u received another ForwardProbe(source, destID, Next′, seq) message.
We consider both cases individually (each time for id(source) < destID, because the other case is analogous).

In the first case, the ForwardProbe(source, destID, Next, seq) message is sent by u to itself, with u = source and Next = {u}, which is why Invariant 5b) holds. Also note that since u.seq[destID] is monotonically increasing and seq = source.seq[destID] in this state, if there was an admissible state with source.seq[destID] < seq and v ∈ R_s(source, destID), then this must have been a previous state. Note that v ∈ R_s(source, destID) implies R_s^+(v) ⊆ R_s(source). By Lemma 32, R_s^+(v) ⊆ R_s(source) must still hold in s′, which, if v is staying, implies v ∈ R_s(source, destID). Thus, Invariant 5c) still holds in this case.

In the second case, Invariant 5 held for the ForwardProbe(source, destID, Next′, seq) message that u received. Note that u only sends the ForwardProbe(source, destID, Next, seq) message if id(u) ≠ destID. Thus, if there is a v such that id(v) = destID, then u ≠ v, and since R_s(Next, destID) and R_s(Next′, destID) differ only in u (as Next = Next′ \ {u} ∪ Right(u)), Invariant 5c) also holds for the new message. Notice that the new message is sent to a node w ∈ Right(u) or w ∈ Next′, i.e., w ∈ R_s(Next′) in any case. R_s(Next′) ⊆ R_s(source) implies R_s(Next) ⊆ R_s(source) (as Next = Next′ \ {u} ∪ Right(u)), yielding the claim of Invariant 5b) for the new message.

All in all, Invariant 5 has to hold in s′ as well, proving the claim. ◀

▶ Lemma 34. If in a computation of Build-List* there is a state in which Invariants 1-7 hold, then in all subsequent states Invariants 1-7 will hold.

Proof. Again, assume there is a state s in which Invariants 1-7 hold such that in the (directly) subsequent state s′ one of the first seven invariants does not hold. By Lemma 33, this can only be Invariant 6 or Invariant 7.
Observe that Invariant 6 and Invariant 7 can only be violated if there is a staying node v with id(v) = destID. Again, by Lemma 32 and because R_s^+(x) ⊆ R_s(y) is equivalent to x ∈ R_s(y) if x is staying, either of the two invariants can only be violated if a new ProbeSuccess(destID, seq, dest) or a new ProbeFail(destID, seq) message was sent by a node w between s and s′. We consider both cases individually.

Assume a new ProbeSuccess(destID, seq, dest) message has been sent by w to a node u. According to the protocol, this only happens in a ForwardProbe() action, when a ForwardProbe(source, destID, Next, seq) message has arrived at w = dest with id(w) = destID and u = source. As stated before, w must be staying. Thus, Invariant 5b) implies dest ∈ R_s(u).

For the ProbeFail() messages, assume a node w sends a ProbeFail(destID, seq) message to a node u. According to the protocol, this only happens in a ForwardProbe() action, when a ForwardProbe(source, destID, Next, seq) message has arrived at w with id(w) ≠ destID, u = source, Next = {w}, and there is no y ∈ Right(w) with id(y) ≤ destID. If no staying node with id destID exists, we are done. Otherwise, we have for this node v that v ∉ R_s(Next, destID). By Invariant 5c), this implies the claim.

Thus, Invariant 6 and Invariant 7 have to hold in s′ as well, proving the claim. ◀

Now we can finally prove Lemma 30:

Proof. Assume there is an admissible state s such that the (directly) subsequent state s′ is not admissible. By Lemma 34, only Invariant 8 can be violated in s′. However, by a similar argument as in the proof of Lemma 13, this is not possible. ◀

The following also holds:

▶ Lemma 35. In every computation of Build-List* there is an admissible state.

Proof. Note that according to Lemma 24, every computation of Build-List* has a suffix in which all leaving nodes are gone, and note that gone nodes do not perform any actions.
Furthermore, note that a ReverseAndLinearize(nodeList, uniqueValue) message is only sent if a node received a ReverseAndLinearizeACK(v, uniqueValue) message. Moreover, a ReverseAndLinearizeACK(v, uniqueValue) message can only be sent if a node receives a ReverseAndLinearizeREQ(DIR) message, and such a message can only be sent by a leaving node. However, in the aforementioned suffix, no leaving node can send a message any more. Thus, there is a suffix in which the third and the fourth invariant always hold.

Note that by Theorem 28, the remaining nodes will converge to the list. In this state, similar to the argument used in the proof of Lemma 18, no new Introduce(v, w) messages with v ≠ w and no new Linearize(u) messages can be initiated, i.e., the first two invariants always hold. Note that since R_s^+(v) ⊆ R_s(u) is equivalent to v ∈ R_s(u) if v is staying, and the system only consists of staying nodes in the current suffix, Invariants 5-8 are equivalent to Invariants 3-6 in Section 3.3. Thus, the rest of the proof is analogous to the proof of Lemma 18. ◀

Note that Lemma 30 and Lemma 35 imply the following corollary:

▶ Corollary 36. In every computation of Build-List*, there exists a suffix in which every state is admissible.

For the rest of this subsection, we assume that every computation starts in an admissible state. This is due to the fact that monotonic searchability must hold starting from admissible states only. Furthermore, w.l.o.g. we only consider requests Search(u, destID) with id(u) < destID. As in Section 3.3, we need some additional results before we can prove Theorem 29.

▶ Lemma 37. Assume that for a ForwardProbe(v, destID, Next, seq) message m ∈ x.Ch, there is a u ∈ R_s(Next, destID).
Then either u = x, or there will be a state in which a ForwardProbe(v, destID, Next′, seq) message is in y.Ch for some node y with id(y) > id(x) and u ∈ R_s(Next′, destID), or u is leaving.

Proof. Assume u ≠ x. Note that when m is received by x, a new message with Next′ = Next \ {x} ∪ Right(x) will be sent. According to the fifth invariant, for all nodes z ∈ Next, id(z) ≥ id(x) holds, and x is the node with minimum id among all nodes in Next. By Lemma 2, the same holds for the nodes z ∈ Right(x). Thus, x is the node with minimum id among all nodes in R_s(Next, destID), and for the node y to which the new ForwardProbe(v, destID, Next′, seq) message is sent, id(y) > id(x) holds. Furthermore, R_s(Next, destID) \ {x} ⊆ R_s(Next′, destID), implying u ∈ R_s(Next′, destID) unless u has become leaving. ◀

This allows us to prove the following lemma:

▶ Lemma 38. For every message m = ForwardProbe(v, destID, Next, seq) ∈ u.Ch with id(u) < destID, it holds that if there is a staying node w with id(w) = destID in the network and w ∈ R_s(u), then eventually there will be a ForwardProbe(v, destID, Next′, seq) message in w.Ch, or w will be leaving.

Proof. Note that when m arrives at u, Next will be changed such that R_s(u, w) ⊆ R_s(Next, w). If w ∈ R_s(u), then w ∈ R_s(Next, w) afterwards. Thus, by applying Lemma 37 recursively, we obtain that eventually a ForwardProbe(v, destID, Next′, seq) message will be in w.Ch, which will be received according to the fair message receipt assumption, unless w becomes leaving.
◀

Using these results, the proof of Theorem 29 is analogous to the proof of Theorem 12 (substituting R(v) by R_s(v), noting that R_s^+(v) ⊆ R_s(u) is equivalent to v ∈ R_s(u) if v is staying, and using Lemma 32 instead of Lemma 15 and Lemma 38 instead of Lemma 20). Note that as soon as a node becomes leaving, searchability to this node no longer needs to be satisfied.

To the best of our knowledge, we presented the first protocol that self-stabilizes a topology while satisfying monotonic searchability. We focused on the line topology as a starting point and extended our protocol such that it additionally solves the Finite Departure Problem. In the design of our protocol, it turned out that the principle of delegating explicit edges only if they have been successfully introduced before is crucial to enable monotonic searchability. A natural open question is whether the application of this principle is sufficient for monotonic searchability. That is, does applying this principle to other protocols that stabilize a topology (e.g., rings, skip graphs, Delaunay graphs) directly yield monotonic searchability, or do other topologies require more specialized solutions?

References

James Aspnes and Yinghua Wu. O(log n)-time overlay network construction from graphs with out-degree 1. In Principles of Distributed Systems, 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17-20, 2007, Proceedings, pages 286-300, 2007.

Andrew Berns, Sukumar Ghosh, and Sriram V. Pemmaraju. Building self-stabilizing overlay networks with the transitive closure framework. Theor. Comput. Sci., 512:2-14, 2013.

Tushar Deepak Chandra and Sam Toueg. Unreliable failure detectors for reliable distributed systems. J. ACM, 43(2):225-267, 1996.

Edsger W. Dijkstra. Self-stabilizing systems in spite of distributed control. Commun. ACM, 17(11):643-644, 1974.

Shlomi Dolev and Ronen I. Kat. Hypertree for self-stabilizing peer-to-peer systems.
Distributed Computing, 20(5):375-388, 2008.

Shlomi Dolev and Nir Tzachar. Spanders: Distributed spanning expanders. Sci. Comput. Program., 78(5):544-555, 2013.

Dianne Foreback, Andreas Koutsopoulos, Mikhail Nesterenko, Christian Scheideler, and Thim Strothmann. On stabilizing departures in overlay networks. In Stabilization, Safety, and Security of Distributed Systems - 16th International Symposium, SSS 2014, Paderborn, Germany, September 28 - October 1, 2014, Proceedings, pages 48-62, 2014.

Dominik Gall, Riko Jacob, Andréa W. Richa, Christian Scheideler, Stefan Schmid, and Hanjo Täubig. A note on the parallel runtime of self-stabilizing graph linearization. Theory Comput. Syst., 55(1):110-135, 2014.

Riko Jacob, Andréa W. Richa, Christian Scheideler, Stefan Schmid, and Hanjo Täubig. Skip+: A self-stabilizing skip graph. J. ACM, 61(6):36:1-36:26, 2014.

Riko Jacob, Stephan Ritscher, Christian Scheideler, and Stefan Schmid. Towards higher-dimensional topological self-stabilization: A distributed algorithm for delaunay graphs. Theor. Comput. Sci., 457:137-148, 2012.

Sebastian Kniesburges, Andreas Koutsopoulos, and Christian Scheideler. A self-stabilization process for small-world networks. In , pages 1261-1271, 2012.

Sebastian Kniesburges, Andreas Koutsopoulos, and Christian Scheideler. Re-Chord: A self-stabilizing chord overlay network. Theory Comput. Syst., 55(3):591-612, 2014.

Andreas Koutsopoulos, Christian Scheideler, and Thim Strothmann. Towards a universal approach for the finite departure problem in overlay networks. In Stabilization, Safety, and Security of Distributed Systems - 17th International Symposium, SSS 2015, Edmonton, AB, Canada, August 18-21, 2015, Proceedings, pages 201-216, 2015.

Rizal Mohd Nor, Mikhail Nesterenko, and Christian Scheideler. Corona: A stabilizing deterministic message-passing skip list. Theor. Comput. Sci., 512:119-129, 2013.

Melih Onus, Andréa W. Richa, and Christian Scheideler.
Linearization: Locally self-stabilizing sorting in graphs. In Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments, ALENEX 2007, New Orleans, Louisiana, USA, January 6, 2007, 2007.

Ayman Shaker and Douglas S. Reeves. Self-stabilizing structured ring topology P2P systems. In Fifth IEEE International Conference on Peer-to-Peer Computing (P2P 2005), 31 August - 2 September 2005, Konstanz, Germany, pages 39-46, 2005.

Yukiko Yamauchi and Sébastien Tixeuil. Monotonic stabilization. In