Open Multi-Agent Systems with Variable Size: the Case of Gossiping
Charles Monnoyer de Galland, Samuel Martin, Julien M. Hendrickx
Abstract—We consider open multi-agent systems, which are systems subject to frequent arrivals and departures of agents while the process studied takes place. We study the behavior of all-to-all pairwise gossip interactions in such open systems. Arrivals and departures of agents imply that the composition and size of the system evolve with time, and in particular prevent convergence. We describe the expected behavior of the system by showing that the evolution of scale-independent quantities can be characterized exactly by a fixed-size linear dynamical system. We apply this approach to characterize the evolution of the first two moments (and thus also of the variance) for open systems of both fixed and variable size. Our approach is based on the continuous-time modelling of random asynchronous events, namely gossip steps, arrivals, departures, and replacements.
Index Terms—Open multi-agent systems, Agents and autonomous systems, Cooperative control, Linear systems, Markov processes.
I. INTRODUCTION
Flexibility and scalability are among the most cited and desired features of multi-agent systems. Real-life examples include flocks of birds [1], ad-hoc networks of mobile devices [2], the Internet, or even vehicle coordination [3], for which potential agent failures or new agent arrivals are expected to be handled by the system. However, the classical models used to study multi-agent systems assume that their composition, as complex as it can be, remains unchanged over time. This assumption then allows for characterizing asymptotic behaviors of multi-agent systems, such as convergence or synchronization.
If agent arrivals and departures are sufficiently rare as compared to the time-scale of the process that is studied by the system, this apparent contradiction may be justified, as the system is expected to be able to incorporate the effect of such an event before the next one occurs. Nevertheless, this may no longer be the case for large systems, as both the arrival or departure probability of an agent and the characteristic length of a process grow with the number of agents. Living systems with birth processes or even human societies are examples of systems whose growth is proportional to their size. Extreme environments where communication within the system may
C. Monnoyer de Galland and J. M. Hendrickx are with the ICTEAM institute, UCLouvain, Louvain-la-Neuve, Belgium. C. Monnoyer de Galland is a FRIA fellow (F.R.S.-FNRS). S. Martin is with Université de Lorraine and CNRS, CRAN, UMR 7039, 2 Avenue de la Forêt de Haye, 54518 Vandoeuvre-lès-Nancy, France. This work is supported by the "RevealFlight" ARC at UCLouvain, and by the Incentive Grant for Scientific Research (MIS) "Learning from Pairwise Data" of the F.R.S.-FNRS. Email addresses: [email protected], [email protected], [email protected].

be difficult or infrequent can also lead to slow convergence rates naturally comparable to agent failure rates, and thus challenge this assumption too. Moreover, there are systems that are inherently open: think of a stretch of road shared by vehicles that keep entering and leaving it.
We consider here open multi-agent systems, which are subject to permanent arrivals and departures of agents during the execution of the process that is considered. These arrivals and departures result in new challenges for the design and analysis of such systems. An illustration is provided in Figure 1.

Figure 1. Example of dynamics of an open multi-agent system of agents with random agent replacements and pairwise average gossips. Each line corresponds to the value held by an agent, and each red circle highlights a replacement event. The repeated replacements prevent convergence to consensus. See Section IV for a precise description of the system.

Firstly, every arrival and departure changes the system state dimension, whose evolution becomes challenging to analyze. The state of the system itself suffers repeated, potentially important changes that prevent it from asymptotically converging to some specific state (this is clear from Figure 1). Rather, it may approach some form of steady-state behavior, which can be characterized by some relevant quantities. As in classical control in the presence of perturbations, the choice of the measures is not neutral, and different descriptive quantities may behave in very different ways.
Secondly, arrivals and departures significantly impact the design of decentralized algorithms. On the one hand, departures often imply losses of information that could be needed depending on the nature of the problem.
On the other hand, the desired output of the algorithm can be defined by the values held by the agents presently in the system, and thus vary over time: it can then become necessary to eliminate outdated information. Algorithms in open systems must thus be robust to arrivals and departures, and able to handle potentially variable objectives. Such algorithms were already explored for instance for the MAX-consensus problem in [4] or the median consensus problem in [5], both subject to arrivals and departures. Moreover, algorithms designed for open systems cannot be expected to provide exact results. Fundamental limitations on the performance of averaging algorithms in open systems were for instance exposed in [6], [7]. Hence, it may be preferable to maintain an approximate answer robust to perturbations rather than ensuring asymptotic convergence to an exact answer if the composition were to remain constant.

A. Contribution
We focus on the analysis of open multi-agent systems subject to random pairwise gossip interactions [8], with the goal of developing an approach applicable to general open systems. We consider all-to-all (possible) pairwise interactions, focusing on systems where departures and arrivals take place at random times; see Section II for a complete definition.
We analyze the system evolution in terms of scale-independent quantities. Two such relevant quantities, which we study in this work, are given by the first two expected moments: the expected squared mean $\mathbb{E}\bar{x}^2$ and the expected mean of squares $\mathbb{E}\overline{x^2}$ of the system state $x$. These quantities also provide the evolution of the expected variance $\mathbb{E}\left(\overline{x^2} - \bar{x}^2\right)$. We show in Section III that the evolution of these quantities can be characterized exactly, and that they evolve according to an associated 2-dimensional linear system.
As a first case study, we analyze in Section IV systems subject to random replacements (i.e., a departure immediately followed by an arrival). In this simplified setting, the system size remains constant. We provide the evolution of the descriptors as a function of the rates at which replacements and gossip steps occur; those are directly related to the probability for a random event to be a replacement. We then focus on a more general case of open systems by considering systems subject to random arrivals and departures in Section V. This requires more advanced tools to monitor both the state and the size of the system at the same time.

B. Preliminary results
Preliminary results on simplified versions of the systems considered here were presented at the Allerton Conference on Communication, Control and Computing (2016) [9]. Those considered deterministic arrivals and departures through the analysis of systems subject to periodic replacements, and to arrivals without departures. The analysis was performed through the study of the evolution of size-independent descriptors, namely the expected mean, the expected squared mean, and the expected variance, as a 3-dimensional linear system.
Another preliminary version was presented at the Conference on Decision and Control (2017) [10]. This one extended the results of [9] to the stochastic case, still considering fixed-size systems and growing systems where the replacements and arrivals are probabilistic events. Moreover, those results included a convergence analysis and a different choice of the moments that were studied to model the evolution of the system with a 2-dimensional linear system.
With respect to these early versions, the main new contributions in this analysis include (i) the study of completely open systems subject to independent asynchronous random arrivals and departures, leading to a variable and non-monotonous system size, in Section V, (ii) the use of continuous-time tools to model the evolution of the scale-independent quantities through asynchronous events, allowing for simpler proofs (whereas only discrete tools were considered up to now), and (iii) the derivation of bounds on the expected variance for situations where no assumption can be made on the way the agent leaving the system is selected (see Section IV-C).
C. Other works on open multi-agent systems
The possibility of agents joining or leaving the system has been recognized in computer science, and specific architectures have for example been proposed to deploy large-scale open multi-agent systems, see e.g. the THOMAS project [11]. There also exist mechanisms allowing distributed computation processes to cope with the shutdown of certain nodes or to take advantage of the arrival of new nodes. Typically, algorithms have been designed to maintain network connectivity in P2P networks subject to departures and arrivals [12]. Frameworks similar to open multi-agent systems have also been considered with VTL for autonomous cars to deal with cross-sections [13], and in the context of trust and reputation computation, motivated by the need to determine which arriving agents may be considered reliable, see e.g., the model FIRE [14]. However, the study of these algorithms' behavior is mostly empirical.
Varying compositions were also studied in the context of self-stabilizing population protocols [1], [15], where interacting agents (typically finite-state machines) can undergo temporary or permanent failures, which can respectively represent the replacement or the departure of an agent. The objective in those works is to design algorithms that eventually stabilize on the desired answer if the system composition stops changing, i.e., once the system has become closed.
Opinion dynamics models with arrivals and departures have also been empirically studied in [16], [17]. More generally, simulation-based analyses were performed in [18] for social phenomena considering arrivals and departures.
Dynamic consensus in open systems has also been investigated in [19], both in terms of stability analysis and algorithm design.
Finally, openness starts being considered for decentralized optimization, either indirectly through the use of variations of the objective function in online decentralized optimization in [20], or directly with stability analyses of decentralized gradient descent in open systems in [21].

II. SYSTEM DESCRIPTION
We consider a multi-agent system whose composition evolves with time. We use integers to label the agents. We denote by $N(t) \subset \mathbb{N}$ the set of agents present in the system at time $t$, and by $n(t)$ the number of agents present at time $t$, i.e., the cardinality of $N(t)$. Each agent $i$ holds a value $x_i(t) \in \mathbb{R}$. We assume that the initial values of the agents at time $t = 0$ are randomly chosen according to some distribution $D$ with mean $0$ and variance $\sigma^2$. The results can immediately be adapted to distributions $D$ with arbitrary constant mean $\mu$.
We consider a continuous evolution of the time $t$. In this configuration, events occur asynchronously at random times and potentially impact the state and size of the system. An event $\epsilon$ takes place according to a Poisson clock of rate $\lambda_\epsilon$ (which can depend on the size of the system $n(t)$: this will be discussed in more detail in Section III). We provide the possible types of events that can occur below, where we denote by $x(t^-)$ the state of the system right before an event taking place at time $t$, and by $x(t^+)$ its state after.

(a) Gossip: Two agents $i, j \in N(t^-)$ are uniformly randomly and independently selected among the $n(t^-)$ agents present in the system (with in particular the possibility of selecting twice the same agent), and they update their values $x_i$, $x_j$ by performing a pairwise average:
$$x_i(t^+) = x_j(t^+) = \frac{x_i(t^-) + x_j(t^-)}{2}. \quad (1)$$

(b) Arrival: One new agent $i \notin N(s)$, $\forall s \leq t$ (i.e., that has never been in the system before), joins the system, so that $N(t^+) = N(t^-) \cup \{i\}$ and $n(t^+) = n(t^-) + 1$. The initial value $x_i(t^+) \in \mathbb{R}$ of the arriving agent is drawn independently from the same distribution $D$ as for the initialization of the system.

(c) Departure: An agent $i \in N(t^-)$ is selected and leaves the system, so that $N(t^+) = N(t^-) \setminus \{i\}$ and $n(t^+) = n(t^-) - 1$. This event may only occur if $n(t^-) > 1$.
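The event model above can be sketched in code. The following is a minimal simulation sketch (our own code and helper names, not from the paper), assuming per-agent Poisson clocks so that the total rate of each event type in a system of $n$ agents is $n$ times the individual rate:

```python
# Minimal event-driven sketch of the model of Section II (hypothetical
# helper names; rates lam_g, lam_a, lam_d are per-agent gossip, arrival
# and departure rates). Time between events is exponential with the
# total rate, and the event type is picked proportionally to the rates.
import random

def simulate(n0=10, lam_g=1.0, lam_a=0.1, lam_d=0.1,
             t_end=5.0, sigma=1.0, seed=1):
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma) for _ in range(n0)]  # i.i.d. initial values
    t = 0.0
    while True:
        n = len(x)
        # Per-agent clocks -> total rates n*lam_g, n*lam_a, n*lam_d;
        # departures are only allowed while n > 1, as in event (c).
        rates = [n * lam_g, n * lam_a, n * lam_d if n > 1 else 0.0]
        total = sum(rates)
        t += rng.expovariate(total)  # next asynchronous event time
        if t > t_end:
            return x
        u = rng.random() * total
        if u < rates[0]:
            # (a) Gossip: i, j uniform and independent (i == j allowed)
            i, j = rng.randrange(n), rng.randrange(n)
            x[i] = x[j] = (x[i] + x[j]) / 2.0
        elif u < rates[0] + rates[1]:
            # (b) Arrival: fresh value drawn from the same distribution D
            x.append(rng.gauss(0.0, sigma))
        else:
            # (c) Departure: a uniformly chosen agent leaves
            x.pop(rng.randrange(n))
```

With arrivals and departures disabled (`lam_a = lam_d = 0`), this reduces to a closed gossip system, which reaches consensus as recalled in Section II-A.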
There are several ways to select the departing agent. By default, we will consider a random departure, which consists in the uniformly random choice of the departing agent. We will also consider later an adversarial departure, where no assumption can be made on this choice.
Notice that the way arrivals and departures are defined implies that an agent having left the system cannot go back into it. Moreover, all the random events above are assumed independent of each other. However, we will sometimes consider for simplicity a replacement event, which consists of the instantaneous combination of a departure and an arrival: an agent leaves the system and is instantaneously replaced.

A. Scale-independent quantities of interest
The aim of the study is to characterize the disagreement among agents, i.e., the distance to consensus. We say that consensus is reached asymptotically when
$$\lim_{t \to \infty} \max_{(i,j) \in N(t)} |x_i(t) - x_j(t)| = 0. \quad (2)$$
If the system dynamics does not include agent departures or arrivals, it is known that the gossip process we consider leads to consensus, see e.g., [8], [22]. The objective here is to understand how agent arrivals and departures impact the disagreement among agents. To do so, we study several quantities of interest. Because the system size may change significantly with time, we do not directly track $x(t)$, but rather focus on scale-independent quantities, i.e., quantities whose values do not scale with the size of the system. We consider in particular the empirical mean of the squares and the variance defined as
$$\overline{x^2} = \frac{1}{n} \sum_{i \in N} x_i^2, \qquad \mathrm{Var}(x) = \frac{1}{n} \sum_{i \in N} (x_i - \bar{x})^2 = \overline{x^2} - \bar{x}^2, \quad (3)$$
respectively, where references to time were removed to lighten the notation. Our study will focus on the evolution of $\mathbb{E}\,\mathrm{Var}(x)$, which will also require monitoring $\mathbb{E}\bar{x}^2$ and $\mathbb{E}\overline{x^2}$. When new agents keep arriving it is impossible to achieve asymptotic consensus in the sense of (2), because the new agent's value will with high probability be different from the values of the agents already present in the system. The study of $\mathbb{E}\,\mathrm{Var}(x)$ will allow us to see how far the system will be from consensus. The expected mean $\mathbb{E}\bar{x}$ could also have been monitored. It evolves following an independent one-dimensional linear system (see e.g., [9] for an analysis of $\mathbb{E}\bar{x}$ in a simplified setting). However, we omit this part of the study due to space limitation.

III. PRELIMINARY TOOLS
We first state the following preliminary lemma, which provides an upper bound on the expected value of both descriptors $\mathbb{E}\bar{x}^2$ and $\mathbb{E}\overline{x^2}$, and which is proved in Appendix A. It will later allow us to bound the evolution of the variance independently of the other descriptors, and to perform further analyses.

Lemma 1.
Consider the setting described above (where the values $x_i(0)$ are chosen in an i.i.d. way with expected value $0$ and variance $\sigma^2$), and let $t \geq 0$ be some arbitrary fixed time. Then, for any set $S$ of agents in the system at time $t$, and for any fixed or stochastic (but independent of agent values) sequence of gossips, departures and arrivals, there holds $\mathbb{E}\big(\sum_{i \in S} x_i(t)\big)^2 \leq |S|\,\sigma^2$. In particular,
$$\mathbb{E}\left(\bar{x}^2 \mid n(t) = j\right) \leq \frac{\sigma^2}{j}; \qquad \mathbb{E}\left(\overline{x^2} \mid n(t) = j\right) \leq \sigma^2. \quad (4)$$
Notice that at the beginning of the process, there holds $\mathbb{E}(\bar{x}^2(0) \mid n(0) = j) = \frac{\sigma^2}{j}$ and $\mathbb{E}(\overline{x^2}(0) \mid n(0) = j) = \sigma^2$. Hence, the above lemma states that the expected values of the descriptors cannot exceed those they had at the initialization of the system. This follows from the fact that $x_j(t)$ is a combination of contributions from other agents that have potentially already left the system, and thus includes at least as much information as if no interaction ever happened.

A. Effect of the different events
We now show that the evolution of the expected moments $\mathbb{E}\bar{x}^2$ and $\mathbb{E}\overline{x^2}$ through events is governed by a 2-dimensional affine system, from which we also derive the evolution of $\mathbb{E}\,\mathrm{Var}(x)$. To lighten the notations, we denote by $X$ the vector containing $\bar{x}^2$ and $\overline{x^2}$, so that $\mathrm{Var}(x) = (-1,\, 1)\,X = \overline{x^2} - \bar{x}^2$. We refer to $x$ (resp. $X$) as the state (resp. the corresponding descriptors vector) as it is before the event that is considered, and to $x'$ (resp. $X'$) after. The proofs of the following lemmas are provided in Appendix B for the sake of completeness.
The definition of the different types of events implies that the descriptors evolve differently depending on the system size at the moment the event takes place. Hence, we distinguish the events according to both their type and the system size at the moment they occur. The occurrence of a given event then only makes sense if the system size allows it.
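The event maps stated in the lemmas below can be rederived directly from the event definitions of Section II. As a numeric sanity check, the following sketch (our own code, in exact rational arithmetic, with the ordering $X = (\bar{x}^2, \overline{x^2})^T$ as above) verifies that the replacement map factors as a random departure among $n$ agents followed by an arrival in a system of $n-1$ agents:

```python
# Sanity check (our reconstruction, not the paper's code) that the
# replacement map Rep_n equals Arr_{n-1} composed with Dep_n, using the
# 2x2 update matrices rederived from the event definitions.
from fractions import Fraction as F

def A_dep(n):   # random departure from n agents (Lemma 4)
    return [[F(n * (n - 2), (n - 1) ** 2), F(1, (n - 1) ** 2)],
            [F(0), F(1)]]

def A_arr(n):   # arrival in a system of n agents (Lemma 3)
    return [[F(n ** 2, (n + 1) ** 2), F(0)],
            [F(0), F(n, n + 1)]]

def A_rep(n):   # replacement in a system of n agents (Lemma 5)
    return [[F(n - 2, n), F(1, n ** 2)],
            [F(0), F(n - 1, n)]]

def b_arr(n, sigma2=F(1)):   # offset vector of an arrival (Lemma 3)
    return [sigma2 / (n + 1) ** 2, sigma2 / (n + 1)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Rep_n = Arr_{n-1} after Dep_n, and the offset of a replacement is
# that of an arrival in a system of n-1 agents.
for n in range(3, 30):
    assert matmul(A_arr(n - 1), A_dep(n)) == A_rep(n)
    assert b_arr(n - 1) == [F(1, n ** 2), F(1, n)]
```

The exact `Fraction` arithmetic avoids any floating-point tolerance in the comparison.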
Lemma 2 (Gossip step). In a system of initially $n$ agents subject to a gossip step event (noted $\mathrm{Gos}_n$), there holds
$$\mathbb{E}(X' \mid X, \mathrm{Gos}_n) = A_{\mathrm{Gos}_n} X, \quad (5)$$
where
$$A_{\mathrm{Gos}_n} = \begin{pmatrix} 1 & 0 \\ \frac{1}{n} & 1 - \frac{1}{n} \end{pmatrix}. \quad (6)$$
In particular,
$$\mathbb{E}(\mathrm{Var}(x') \mid \mathrm{Var}(x), \mathrm{Gos}_n) = \left(1 - \frac{1}{n}\right) \mathrm{Var}(x). \quad (7)$$

Lemma 3 (Arrival). In a system of initially $n$ agents subject to an arrival event (noted $\mathrm{Arr}_n$), there holds
$$\mathbb{E}(X' \mid X, \mathrm{Arr}_n) = A_{\mathrm{Arr}_n} X + b_{\mathrm{Arr}_n}, \quad (8)$$
where
$$A_{\mathrm{Arr}_n} = \begin{pmatrix} \frac{n^2}{(n+1)^2} & 0 \\ 0 & \frac{n}{n+1} \end{pmatrix} \quad \text{and} \quad b_{\mathrm{Arr}_n} = \begin{pmatrix} \frac{1}{(n+1)^2} \\ \frac{1}{n+1} \end{pmatrix} \sigma^2. \quad (9)$$
In particular,
$$\mathbb{E}(\mathrm{Var}(x') \mid \mathrm{Var}(x), \mathrm{Arr}_n) = \frac{n}{n+1} \left( \mathrm{Var}(x) + \frac{\sigma^2 + \bar{x}^2}{n+1} \right) \leq \frac{n}{n+1} \left( \mathrm{Var}(x) + \frac{\sigma^2}{n} \right). \quad (10)$$

Lemma 4 (Departure). In a system of initially $n$ agents subject to a (random) departure event (noted $\mathrm{Dep}_n$), there holds
$$\mathbb{E}(X' \mid X, \mathrm{Dep}_n) = A_{\mathrm{Dep}_n} X, \quad (11)$$
where
$$A_{\mathrm{Dep}_n} = \begin{pmatrix} \frac{n(n-2)}{(n-1)^2} & \frac{1}{(n-1)^2} \\ 0 & 1 \end{pmatrix}. \quad (12)$$
In particular,
$$\mathbb{E}(\mathrm{Var}(x') \mid \mathrm{Var}(x), \mathrm{Dep}_n) = \left(1 - \frac{1}{(n-1)^2}\right) \mathrm{Var}(x). \quad (13)$$
We remind that the above result holds for random departures, where the leaving agent is chosen uniformly at random among those in the system before the departure.
We now consider the random replacement of an agent, which consists of a random departure immediately followed by an arrival. The next result follows from a combination of Lemmas 4 and 3, the latter applied to a system of size $n-1$ joined by an $n$-th agent.

Lemma 5 (Replacement). In a system of initially $n$ agents subject to a replacement event (noted $\mathrm{Rep}_n$), there holds
$$\mathbb{E}(X' \mid X, \mathrm{Rep}_n) = A_{\mathrm{Rep}_n} X + b_{\mathrm{Rep}_n}, \quad (14)$$
where
$$A_{\mathrm{Rep}_n} = \begin{pmatrix} \frac{n-2}{n} & \frac{1}{n^2} \\ 0 & \frac{n-1}{n} \end{pmatrix} \quad \text{and} \quad b_{\mathrm{Rep}_n} = \begin{pmatrix} \frac{1}{n^2} \\ \frac{1}{n} \end{pmatrix} \sigma^2. \quad (15)$$
Moreover,
$$\mathbb{E}(\mathrm{Var}(x') \mid \mathrm{Var}(x), \mathrm{Rep}_n) \leq \frac{n^2 - n - 1}{n^2} \mathrm{Var}(x) + \frac{n^2 - 1}{n^3} \sigma^2. \quad (16)$$

B. General descriptors evolution
We now provide a variation of well-known results on Markov processes. This serves as a basis to characterize the expected evolution of the descriptors conditioned on the size of the system. We show that it is linked with a flow equation governing the evolution of the system size. We begin by providing a simple illustration of application of that result.
1) Introductory example:
Let us consider a Markov process $n(t)$ that admits two states $\alpha$ and $\beta$. This process switches from state $\alpha$ to $\beta$ (resp. $\beta$ to $\alpha$) according to a Poisson clock of rate $\lambda_{\alpha\to\beta}$ (resp. $\lambda_{\beta\to\alpha}$), and remains unchanged in between. See Figure 2 for a graphical representation.

Figure 2. Graphical representation of the two-state Markov process $n(t)$ used as introductory example to illustrate Proposition 1.

Upon this process is built a random process $X(t)$ that evolves with $n(t)$ through its transitions (at events $\alpha \to \beta$ and $\beta \to \alpha$). In our example, let us assume
$$\mathbb{E}(X' \mid X, \alpha \to \beta) = \frac{X}{2}, \qquad \mathbb{E}(X' \mid X, \beta \to \alpha) = X + \sigma^2,$$
where $X'$ and $X$ respectively denote the state of $X(t)$ after and before the event, and $\sigma^2$ is some positive value representing e.g. some additive noise. Then it follows that
$$\mathbb{E}(X' \mid \alpha \to \beta) = \frac{1}{2}\mathbb{E}(X \mid \alpha \to \beta) = \frac{1}{2}\mathbb{E}(X \mid n = \alpha); \qquad \mathbb{E}(X' \mid \beta \to \alpha) = \mathbb{E}(X \mid \beta \to \alpha) + \sigma^2 = \mathbb{E}(X \mid n = \beta) + \sigma^2.$$
Our result will imply that the contributions of both states of $n(t)$ to the evolution of $\mathbb{E}X(t)$ satisfy the following flow equations
$$\frac{d}{dt}(X_\alpha \pi_\alpha) = -\lambda_{\alpha\to\beta}\, X_\alpha \pi_\alpha + \lambda_{\beta\to\alpha} \left(X_\beta + \sigma^2\right) \pi_\beta;$$
$$\frac{d}{dt}(X_\beta \pi_\beta) = -\lambda_{\beta\to\alpha}\, X_\beta \pi_\beta + \lambda_{\alpha\to\beta} \left(\frac{X_\alpha}{2}\right) \pi_\alpha,$$
where $X_\alpha(t) = \mathbb{E}(X(t) \mid n(t) = \alpha)$ and $\pi_\alpha(t) = \mathbb{P}(n(t) = \alpha)$ (the same holds for $\beta$). It thus follows from $\mathbb{E}X(t) = X_\alpha(t)\pi_\alpha(t) + X_\beta(t)\pi_\beta(t)$ that
$$\frac{d}{dt}\mathbb{E}X = -\frac{\lambda_{\alpha\to\beta}}{2}\, X_\alpha \pi_\alpha + \lambda_{\beta\to\alpha}\, \pi_\beta\, \sigma^2.$$
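These flow equations can be checked numerically. The sketch below (our own code and names) integrates them with a forward Euler scheme, starting from state $\alpha$ with $X(0) = 0$; one can check from the stated equations that $\mathbb{E}X(t)$ settles at $\sigma^2(\lambda_{\alpha\to\beta} + 2\lambda_{\beta\to\alpha})/(\lambda_{\alpha\to\beta} + \lambda_{\beta\to\alpha})$:

```python
# Forward-Euler integration of the two flow equations of the
# introductory example (a sketch; lam_ab stands for lambda_{alpha->beta}
# and lam_ba for lambda_{beta->alpha}).
def stationary_EX(lam_ab, lam_ba, sigma2, t_end=50.0, dt=1e-3):
    u, w = 0.0, 0.0          # u = X_alpha*pi_alpha, w = X_beta*pi_beta
    pa = 1.0                 # start in state alpha with X(0) = 0
    for _ in range(int(t_end / dt)):
        pb = 1.0 - pa
        du = -lam_ab * u + lam_ba * (w + sigma2 * pb)
        dw = -lam_ba * w + lam_ab * (u / 2.0)
        dpa = -lam_ab * pa + lam_ba * pb   # Markov transition flow
        u += dt * du
        w += dt * dw
        pa += dt * dpa
    return u + w             # E X = X_alpha*pi_alpha + X_beta*pi_beta
```

Since the system is affine, the Euler iteration shares its exact fixed point, so the returned value matches the stationary expectation up to the (exponentially small) transient.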
2) General result:
We now derive a general version of the result claimed above for a general Markov process $n(t) \in \mathbb{N}$ and a random process $X(t)$ that both evolve through events and remain constant in between. The evolution of $X(t)$ through some event $\epsilon$ is random, and its expected value follows the affine system $\mathbb{E}(X' \mid X, \epsilon) = A_\epsilon X + b_\epsilon$ (where $X'$ denotes the state of $X(t)$ after the event).
Notice that for an event $\epsilon$ to happen, the state of $n(t)$ must allow it: in the previous example, the event $\alpha \to \beta$ only makes sense at time $t$ if $n(t) = \alpha$. More generally, for an event $\epsilon$, we denote respectively by $s(\epsilon)$ and $a(\epsilon)$ the source and arrival states of $n(t)$ for that event. Namely, if event $\epsilon$ happens at time $t$, then it implies that $n(t^-) = s(\epsilon)$ and $n(t^+) = a(\epsilon)$. Hence, it follows that
$$\mathbb{E}(X' \mid \epsilon) = A_\epsilon\, \mathbb{E}(X \mid n = s(\epsilon)) + b_\epsilon. \quad (17)$$
Moreover, we also need to introduce the following assumption.

Assumption 1.
For any event sequence $\xi$ and for any $j \in \mathbb{N}$, $\mathbb{E}(X(t) \mid n(t) = j, \xi)$ is nonnegative and uniformly upper bounded. Moreover, conditional on $n(t) = j$ for some $j \in \mathbb{N}$, the probability for two events or more to happen between times $t - \delta t$ and $t$ for some $\delta t > 0$ is in $o(\delta t)$.
We can now state the following proposition, which is proved in Appendix C.
Proposition 1.
In the setting described above, and under Assumption 1, there holds
$$\frac{d}{dt}(X_j \pi_j) = -\sum_{\epsilon\,:\,s(\epsilon) = j} \lambda_\epsilon X_j \pi_j + \sum_{\epsilon\,:\,a(\epsilon) = j} \lambda_\epsilon \left(A_\epsilon X_{s(\epsilon)} + b_\epsilon\right) \pi_{s(\epsilon)}, \quad (18)$$
where the dependence on time is omitted, and where $X_j(t) = \mathbb{E}(X(t) \mid n(t) = j)$ and $\pi_j(t) = \mathbb{P}(n(t) = j)$.

Corollary 1.
If for all events $\epsilon$ there holds $\mathbb{E}(X' \mid X, \epsilon) \leq A_\epsilon X + b_\epsilon$ (with $A_\epsilon$ nonnegative, i.e., all elements of $A_\epsilon$ are nonnegative), there holds
$$\frac{d}{dt}(X_j \pi_j) \leq -\sum_{\epsilon\,:\,s(\epsilon) = j} \lambda_\epsilon X_j \pi_j + \sum_{\epsilon\,:\,a(\epsilon) = j} \lambda_\epsilon \left(A_\epsilon X_{s(\epsilon)} + b_\epsilon\right) \pi_{s(\epsilon)}. \quad (19)$$
Proposition 1 depicts a flow equation describing the evolution of the contribution of one given state of $n(t)$ to that of $\mathbb{E}X = \sum_{j \in \mathbb{N}} X_j \pi_j$. Considering the simple case where $X(t) = 1$, it follows that (18) reduces to simple Markov transitions between the states of $n(t)$. The random process $X(t)$ can be considered as some weight distribution assigned to the states of $n(t)$, whose evolution is linked to the flow that governs that of $n(t)$.
In the context of this paper, $n(t)$ is the size of the system, which evolves as a Markov process, and the weight distribution $X(t)$ is some quantity that evolves accordingly (e.g., the descriptors). Proposition 1 will be used to describe how the descriptors behave according to a given size of the system, and will allow us to derive their global evolution with time.

IV. FIXED-SIZE SYSTEM
As a first case study, we assume that the number of agents in the system remains constant, and the size of the system is $n$. This assumption only allows for two types of events among all those listed in the previous section to occur, namely gossips and replacements, respectively happening according to Poisson clocks of individual rates $\lambda_g$ and $\lambda_r$. This means that the frequency of events for a given agent is independent of the system size $n$. Hence, in a system of size $n$, it is expected that $n\lambda_g$ gossips and $n\lambda_r$ replacements respectively happen per unit of time. The expected number of gossips taking place between two replacements, given by the ratio $\rho = \lambda_g/\lambda_r$, remains constant as $n$ grows (the same holds for the probability for a random event to be a replacement, given by $p = \frac{1}{1+\rho}$).
We first provide the exact evolution of the descriptors of the system as well as its asymptotic behavior and the convergence rate of these descriptors. Then, we will consider an alternative definition of replacement events.

A. Descriptors evolution and fixed points
We first consider our default definition of replacements, namely random replacements where the replaced agent is chosen uniformly at random among those presently in the system. The following result, based on Lemmas 2 and 5, and on Proposition 1 (where Lemma 1 ensures the validity of Assumption 1), describes the expected evolution of the system.
Theorem 1.
In a system of fixed size $n$ subject to random replacements and gossips, there holds
$$\frac{d}{dt}\mathbb{E}X(t) = \begin{pmatrix} -2\lambda_r & \frac{\lambda_r}{n} \\ \lambda_g & -(\lambda_g + \lambda_r) \end{pmatrix} \mathbb{E}X(t) + \begin{pmatrix} \frac{1}{n} \\ 1 \end{pmatrix} \lambda_r \sigma^2. \quad (20)$$

Proof.
The proof directly follows from the application of Proposition 1 combined with Lemmas 2 and 5, where $\pi_n(t) = 1$ and $X_n(t) = \mathbb{E}X(t)$ for any $t$, and where we remind that $\lambda_{\mathrm{Gos}_n} = n\lambda_g$ and $\lambda_{\mathrm{Rep}_n} = n\lambda_r$. This leads to the following:
$$\frac{d}{dt}\mathbb{E}X(t) = -n(\lambda_r + \lambda_g)\mathbb{E}X(t) + n\lambda_r \left(A_{\mathrm{Rep}_n} \mathbb{E}X(t) + b_{\mathrm{Rep}_n}\right) + n\lambda_g A_{\mathrm{Gos}_n} \mathbb{E}X(t).$$
A few algebraic steps then conclude the proof.

One can verify that the fixed point of (20) is
$$\mathbb{E}\bar{x}^2\big|_{eq} = \frac{2 + \rho}{2n(1+\rho) - \rho}\,\sigma^2 \quad (21)$$
$$\mathbb{E}\overline{x^2}\big|_{eq} = \frac{2n + \rho}{2n(1+\rho) - \rho}\,\sigma^2 \quad (22)$$
leading to a variance
$$\mathbb{E}\,\mathrm{Var}(x)\big|_{eq} = \frac{1 - \frac{1}{n}}{1 + \rho\left(1 - \frac{1}{2n}\right)}\,\sigma^2 \underset{n\to\infty}{\sim} \frac{\sigma^2}{\rho + 1}, \quad (23)$$
where we remind that $\rho = \lambda_g/\lambda_r$ is the ratio between gossip and replacement rates.
The asymptotic values of these expressions admit some interpretation. Suppose first that $\rho \to 0$, meaning that no gossip ever takes place. The variance is then $\mathbb{E}\,\mathrm{Var}(x)|_{eq} = \frac{n-1}{n}\sigma^2$, and the expected squared mean $\mathbb{E}\bar{x}^2|_{eq} = \frac{\sigma^2}{n}$. This is consistent with a process where agents are only replaced, i.e., a system eventually consisting of $n$ random i.i.d. values with mean 0 and variance $\sigma^2$. This is also the fixed point of the affine equation in Lemma 5. For $\rho \to \infty$, the expected number of gossip steps between two replacements tends to infinity, so that a perfect averaging takes place before any replacement. We obtain in that case a variance $\mathbb{E}\,\mathrm{Var}(x)|_{eq} = 0$ (this essentially corresponds to the system achieving a consensus), and an expected squared mean $\mathbb{E}\bar{x}^2|_{eq} = \frac{\sigma^2}{2n-1}$. This latter number is lower than what would be obtained by averaging $n$ i.i.d. values. This is because it actually results from a weighted average of the values of all agents having been part of the system at some present or past time.
For large values of $n$, which we remind have no impact on the rate ratio $\rho = \lambda_g/\lambda_r$, the expected squared mean $\mathbb{E}\bar{x}^2|_{eq}$ goes to 0, while the variance $\mathbb{E}\,\mathrm{Var}(x)|_{eq}$ goes to $\frac{\sigma^2}{\rho+1}$.
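The fixed point (21)–(22) can be checked against (20) in exact arithmetic. A small sketch (our own code, with the matrix and fixed point as written above and $\lambda_r$ normalized to 1):

```python
# Exact check (a sketch of our own) that the fixed point (21)-(22)
# annihilates the affine dynamics (20), using rational arithmetic.
from fractions import Fraction as F

def residual(n, rho, sigma2=F(1)):
    lam_r, lam_g = F(1), F(rho)          # rates with lambda_r normalized
    M = [[-2 * lam_r, lam_r / n],        # matrix of (20)
         [lam_g, -(lam_g + lam_r)]]
    c = [lam_r * sigma2 / n, lam_r * sigma2]
    d = 2 * n * (1 + F(rho)) - rho       # common denominator of (21)-(22)
    X_eq = [sigma2 * (2 + rho) / d,      # E bar{x}^2 at equilibrium
            sigma2 * (2 * n + rho) / d]  # E overline{x^2} at equilibrium
    return [M[i][0] * X_eq[0] + M[i][1] * X_eq[1] + c[i] for i in range(2)]

assert residual(50, 19) == [0, 0]   # configuration of Figure 3
assert residual(4, 9) == [0, 0]     # configuration of Figure 1
```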
Those results are parallel with those of [10], since the variance can be expressed as $p\sigma^2$, with $p$ the probability for a random event to be a replacement ($p = \frac{1}{1+\rho}$). More precisely, [10] relied on the average number of gossips between two replacements, given by $\rho = p^{-1} - 1$. Notice moreover that increasing the gossip rate makes the variance decay, as more gossip steps allow the system to get closer to consensus; in opposition, increasing the replacement rate makes it increase.
To illustrate Theorem 1, we consider an open system with random events (replacements or gossip steps) with $n = 50$ agents, such that on average one in twenty events is a replacement ($\rho = 19$, or equivalently with the probability for a random event to be a replacement being $p = 0.05$). The agent values are randomly drawn from a normal distribution with zero mean and constant variance $\sigma^2 = 1$. Figure 3 displays the expected evolution of the agents' values $x_i(t)$ as well as the expected evolution of the descriptors with time, simulated over a large number of realizations. Those match accurately the theoretical expectations from Theorem 1 as well as (21) and (22). The top plot exhibits that, even though convergence is obviously not achieved due to the openness of the system, the expected behavior of the estimates still presents a contracting tendency in this configuration. Besides, the bottom plots show the magnitude and convergence speed of the descriptors and allow for deducing those of the variance. In this configuration, gossip steps are frequent enough for information about agents having already left to last longer in the system: this implies a small squared mean $\bar{x}^2$ as compared to the mean of squares $\overline{x^2}$, which largely dominates in magnitude in the estimation of the variance. Nevertheless, the numerous gossip steps allow the mean of squares $\overline{x^2}$ to converge much faster than the squared mean $\bar{x}^2$, which needs arrivals to evolve.
Notice that this trend could be reversed in a smaller system. The illustration provided in the introduction (Figure 1) was obtained for an open multi-agent system of the same kind with $n = 4$ agents and $\rho = 9$ (i.e., $p = 0.1$). Notice that in such a smaller and slower system, the squared mean $\bar{x}^2$ has much more impact than in the example above.

Remark 1.
It is worth pointing out that a similar result can be deduced using Corollary 1 directly applied to the variance, and combined with Lemmas 2 and 5 to derive the following
Figure 3. Illustration of the behavior of an open system of agents subject to random replacements and gossips with rates $\lambda_r = 1$ and $\lambda_g = 19$, so that $\lambda_r + \lambda_g = 20$. (a) depicts the expected evolution over many realizations of the estimates of the agents as well as their average. (b), (c) and (d) respectively depict the expected evolution of the mean of squares $\overline{x^2}$, the squared mean $\bar{x}^2$ and the variance $\mathrm{Var}(x)$, simulated over a large number of realizations, in plain blue line. Those are compared with the theoretical results as obtained in Theorem 1 in dashed red line (the variance is deduced from $\mathrm{Var}(x) = \overline{x^2} - \bar{x}^2$). The expected fixed points from equations (21), (22) and (23) are also depicted in yellow dotted line.

upper bound on the evolution of the variance:
$$\frac{d}{dt}\mathbb{E}\,\mathrm{Var}(x(t)) \leq -\left(\lambda_g + \lambda_r + \frac{\lambda_r}{n}\right)\mathbb{E}\,\mathrm{Var}(x(t)) + \frac{n^2 - 1}{n^2}\,\lambda_r \sigma^2. \quad (24)$$
This result is weaker than that of Theorem 1, which allows for deriving an exact equality for the evolution of the expected variance. This highlights how following two such descriptors allows for a more detailed analysis in this case, even if one were only interested in the variance. An illustration of this bound is given in Figure 4.

B. Convergence rate
We now analyze the rate at which the expected moments will eventually converge to the fixed point described above in (21) and (22). One can show that the eigenvalues of the matrix in (20) are given by
$$r_{1,2} = \frac{-\lambda_g - 3\lambda_r \pm \sqrt{\Delta}}{2}, \quad (25)$$
where $\Delta = (\lambda_g - \lambda_r)^2 + 4\frac{\lambda_g \lambda_r}{n}$, so that $\sqrt{\Delta} = |\lambda_g - \lambda_r| + o\left(\frac{1}{n}\right)$. Hence, we have
$$r_1 = -(\lambda_g + \lambda_r) + o\left(\frac{1}{n}\right); \qquad r_2 = -2\lambda_r + o\left(\frac{1}{n}\right). \quad (26)$$
Notice that those are the diagonal elements of the matrix of (20). The corresponding eigenvectors are then given by
$$v_{1,2} = \left(\frac{\rho - 1 \pm \sqrt{(\rho-1)^2 + 4\rho/n}}{2\rho},\; 1\right)^T, \quad (27)$$
which, using the same reasoning as for the eigenvalues, leads to the following eigenvectors for large values of $n$:
$$v_1 = \left(o(1),\; 1\right)^T; \qquad v_2 = \left(\frac{\rho - 1}{\rho} + o(1),\; 1\right)^T. \quad (28)$$

Interpretation:
First notice that for positive values of the gossip and replacement rates, both eigenvalues r_1 and r_2 are negative, which ensures the convergence of the descriptors towards their respective fixed points.

When λ_g ≫ λ_r, the impact of v_1 quickly vanishes as compared to that of v_2. Moreover, for large values of λ_g, v_2 ≈ (1, 1)^T. Both the squared mean $\mathbb{E}\,\bar{x}^2$ and the mean of squares $\mathbb{E}\,\overline{x^2}$ are thus expected to converge at the same speed, characterized by r_2 ≈ −2λ_r. More precisely, the system is expected to achieve almost-consensus between each replacement, so that the descriptors converge to the same value, and thus the variance to 0. Since the system fully incorporates the effect of newly arrived agents, gossip steps end up having no impact on the convergence speed (this is observable from Lemma 2 considering that both descriptors have the same magnitude), and only the replacement rate defines the convergence speed.

When λ_g ≪ λ_r, the impact of v_2 quickly vanishes. The convergence thus follows v_1 = (o(1), 1)^T, whose squared mean component $\mathbb{E}\,\bar{x}^2$ is vanishingly small, so that the variance E Var(x) becomes essentially equivalent to the mean of squares $\mathbb{E}\,\overline{x^2}$. It follows from Lemmas 2 and 5 that this quantity is contracted by a factor (n−1)/n at each event, no matter if it is a replacement or a gossip (when neglecting the effect of $\mathbb{E}\,\bar{x}^2$ and of external terms). The convergence speed is thus equivalently impacted by the replacement and gossip rates, which corresponds to the eigenvalue r_1 ≈ −(λ_r + λ_g).

Observe that there is a transition when λ_g = λ_r, where replacements and gossip steps are as frequent as each other: it follows that r_1 = r_2 and v_1 = v_2 = (o(1), 1)^T. Furthermore, if λ_r = 0 (i.e., the system closes), then r_2 = 0: no more external contributions are expected and the squared mean $\mathbb{E}\,\bar{x}^2$ remains unchanged.
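As a numerical sanity check, the following sketch evaluates the eigenvalue formula (25) (with Δ as given above; the exact coefficients are our reading of the formula) for the rates of Figure 3. For λ_g > λ_r and large n, the two values should approach −(λ_g + λ_r) and −2λ_r as in (26).

```python
import math

def descriptor_eigenvalues(lambda_g, lambda_r, n):
    # Eigenvalues r_1, r_2 from (25), with
    # Delta = (lambda_g - lambda_r)^2 + 4*lambda_g*lambda_r/n.
    delta = (lambda_g - lambda_r) ** 2 + 4.0 * lambda_g * lambda_r / n
    r1 = (-lambda_g - 3.0 * lambda_r - math.sqrt(delta)) / 2.0
    r2 = (-lambda_g - 3.0 * lambda_r + math.sqrt(delta)) / 2.0
    return r1, r2
```

With λ_g = 19 and λ_r = 1, the values tend to −20 = −(λ_g + λ_r) and −2 = −2λ_r as n grows, consistent with (26).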
The mean of squares $\mathbb{E}\,\overline{x^2}$ ultimately converges to the same value as $\mathbb{E}\,\bar{x}^2$, and the variance asymptotically decays to zero following the convergence rate of $\mathbb{E}\,\overline{x^2}$, which is conditioned by the frequency of gossip steps.

C. The case of adversarial replacements
In this section, we briefly consider an alternative way of defining the choice of the leaving agent at replacements. Up to now, it was assumed that this agent was chosen uniformly at random among those present in the system. We now consider replacements for which no assumption can be made on that choice, and refer to such events as adversarial replacements. These are a more general formulation of replacements that includes those previously considered. We provide an upper bound on the evolution of the expected variance in that case, similar to that of (24). For that purpose, we first state the following lemma, which characterizes the effect of adversarial departure and replacement events on the expected variance, where we remind that x denotes the state of the system before the event, and x' after. The proof is provided in Appendix D.

Lemma 6. [Adversarial departure and replacement] In a system of initially n agents (in the setting described in Section II) subject to an adversarial departure event (noted Dep*_n), there holds
$$\mathbb{E}\big(\mathrm{Var}(x') \,\big|\, \mathrm{Var}(x), \mathrm{Dep}^*_n\big) \le \frac{n}{n-1}\,\mathrm{Var}(x). \qquad (29)$$
Moreover, in the case of an adversarial replacement (noted
Rep*_n), there holds
$$\mathbb{E}\big(\mathrm{Var}(x') \,\big|\, \mathrm{Var}(x), \mathrm{Rep}^*_n\big) \le \mathbb{E}\,\mathrm{Var}(x) + \frac{\sigma^2}{n}. \qquad (30)$$
We can then provide the following theorem based on Corollary 1, where Lemma 1 ensures the validity of Assumption 1.

Theorem 2.
In a system of fixed size subject to adversarial replacements and to random gossips, there holds
$$\frac{d}{dt}\big(\mathbb{E}\,\mathrm{Var}(x(t))\big) \le -\lambda_g\,\mathbb{E}\,\mathrm{Var}(x(t)) + \lambda_r\,\sigma^2. \qquad (31)$$

Proof.
The proof directly follows from the application of Corollary 1 with π_n(t) = 1 and X_n(t) = E X(t), combined with Lemmas 2 and 6.

Figure 4 compares the evolution of the expected variance for both random and adversarial replacements through simulation, together with their respective upper bounds from (24) and from Theorem 2. The values of the agents are randomly drawn from a normal distribution with zero mean and σ² = 1. Adversarial replacements are here defined as the arbitrary choice of the agent j with minimal |x_j(t)| at the time of the replacement. Notice that the upper bound (31) is more conservative than (24), and that the expected variance is larger for adversarial replacements than for random replacements. This directly follows from the fact that the adversarial definition of replacements includes random replacements. More generally, adversarial replacements allow arbitrarily choosing the leaving agent, and thus include the worst-case departure with respect to the evolution of the variance.
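The deterministic inequality behind (29) — whichever agent leaves, the variance of the remaining values is at most n/(n−1) times the original one — can be checked exhaustively on a concrete state; a minimal self-contained sketch:

```python
def worst_case_departure_ratio(x):
    # Ratio between the worst-case post-departure variance and the original
    # variance; by (29) it never exceeds n/(n-1).
    n = len(x)

    def var(v):
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v) / len(v)

    worst = max(var(x[:j] + x[j + 1:]) for j in range(n))
    return worst / var(x)
```

For instance, with the state [1, 2, 3, 10] (n = 4), the worst departure increases the variance, but by a factor below 4/3, as guaranteed by (29).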
Figure 4. Evolution of the expected variance of an open system of agents subject to gossip steps (with rate λ_g = 9) and to replacements (with λ_r = 1); in plain blue line for random replacements, and in dash-dotted yellow line for adversarial replacements. The red dashed line provides the upper bound on the expected variance for random replacements obtained in (24), and the purple dotted line the upper bound from Theorem 2 for adversarial replacements. The black dashed line is the fixed point of the variance for random replacements.

V. VARYING-SIZE SYSTEM
We now consider a fully open system: agents can join and leave the system independently, and no assumption is made anymore on the size of the system at time t. Agents arrive in the system according to a Poisson clock with rate λ_a that is independent of the state and size of the system. In addition, any present agent leaves the system according to another Poisson process of rate λ_d, so that the occurrence of a departure at the system scale follows a Poisson process with rate n(t)λ_d. Similarly, gossip interactions happen at the system scale according to a Poisson process with rate n(t)λ_g, so that the expected number of interactions in which a single agent is involved per unit of time does not depend on n(t).

In addition, we define n_0 = λ_a/λ_d, the system size for which arrivals and departures are equally likely (which can also be interpreted as the average number of arrivals happening in the system before a given agent leaves). We also define γ = λ_g/λ_d, the ratio between the gossip and departure rates (which is proportional to the expected number of gossips experienced by an agent before leaving the system).

A. System size evolution
Provided the definitions of the events that can happen in the system, it appears that the size of the system n(t) is a birth-death process whose birth rate is given by the arrival rate λ_a, and whose death rate is given by the departure rate λ_d.

Figure 5. Graphical representation of the birth-death process defining the evolution of the system size n(t) in terms of the arrival and departure rates.

Let us denote π_i(t) = P(n(t) = i); then the following result is known from standard properties of birth-death processes.

Lemma 7.
Assume that n_0 = λ_a/λ_d < ∞; then for all i ∈ N there exist steady-state probabilities π*_i = lim_{t→∞} π_i(t) satisfying
$$\pi^*_i = \lim_{t\to\infty}\pi_i(t) = \frac{n_0^i}{i!}\, e^{-n_0}. \qquad (32)$$

Proof.
From standard results on birth-death processes, if n(t) is ergodic then π*_i = lim_{t→∞} π_i(t) exists and satisfies the following:
$$\pi^*_0 = \frac{1}{1 + \sum_{j=1}^{\infty}\prod_{i=1}^{j}\frac{\lambda_a}{i\lambda_d}} = \frac{1}{1 + \sum_{k=1}^{\infty}\frac{n_0^k}{k!}} = e^{-n_0},$$
and
$$\pi^*_k = \pi^*_0 \prod_{i=1}^{k}\frac{\lambda_a}{i\lambda_d} = \frac{n_0^k}{k!}\, e^{-n_0}.$$
This concludes the proof, since n_0 = λ_a/λ_d < ∞ implies the ergodicity of n(t).

One can verify from Lemma 7 that
$$\lim_{t\to\infty}\mathbb{E}\,n(t) = n_0, \quad \text{and} \quad \lim_{t\to\infty}\mathbb{E}\,n(t)^2 = n_0^2 + n_0, \qquad (33)$$
and hence that the asymptotic variance of n(t) is given by
$$\lim_{t\to\infty}\mathbb{E}\,\mathrm{Var}(n(t)) = \mathbb{E}\,n(t)^2 - \big(\mathbb{E}\,n(t)\big)^2 = n_0. \qquad (34)$$
Notice that these quantities do not depend on the initial size of the system, but only on the arrival and departure rates. Moreover, this means that the asymptotic distribution of the system size is rather concentrated around n_0.

B. Descriptors evolution
We now study the evolution of the descriptors conditioned on the size of the system. This will allow us to characterize the evolution of these descriptors, and to establish some asymptotic behaviors. In the sequel, we refer to X_j(t) = E(X(t) | n(t) = j), where we remind that $X = \big(\bar{x}^2,\, \overline{x^2}\big)^T$.

Proposition 2.
The evolution of X_j(t)π_j(t) is given by the following:
$$\frac{d}{dt}\, X_j(t)\pi_j(t) = \big(j(A^{\mathrm{Gos}}_j - I)\lambda_g - j\lambda_d - \lambda_a\big) X_j(t)\pi_j(t) + \lambda_a\big(A^{\mathrm{Arr}}_{j-1} X_{j-1}(t) + b^{\mathrm{Arr}}_{j-1}\big)\pi_{j-1}(t) + (j+1)\lambda_d\, A^{\mathrm{Dep}}_{j+1} X_{j+1}(t)\pi_{j+1}(t). \qquad (35)$$

Proof.
The proof follows from the application of Proposition 1 combined with Lemmas 2, 3 and 4 with n(t) = j, for which there holds λ^{Gos}_j = jλ_g, λ^{Arr}_j = λ_a and λ^{Dep}_j = jλ_d.

Corollary 2.
Assume there exists X*_j = lim_{t→∞} X_j(t) for all j ∈ N, so that d/dt(X*_j π*_j) = 0; then there holds
$$\big(n_0 + j + j(I - A^{\mathrm{Gos}}_j)\gamma\big) X^*_j = n_0\, A^{\mathrm{Dep}}_{j+1} X^*_{j+1} + j\, A^{\mathrm{Arr}}_{j-1} X^*_{j-1} + j\, b^{\mathrm{Arr}}_{j-1}. \qquad (36)$$

Proof.
Lemma 7 allows writing π*_j = (n_0^j/j!)e^{-n_0}, where we remind that n_0 = λ_a/λ_d. Equation (35) evaluated at X*_j π*_j then becomes
$$\big(\lambda_a + (I - A^{\mathrm{Gos}}_j)\, j\lambda_g + j\lambda_d\big) X^*_j\, \frac{n_0^j}{j!} e^{-n_0} = (j+1)\lambda_d\, A^{\mathrm{Dep}}_{j+1} X^*_{j+1}\, \frac{n_0^{j+1}}{(j+1)!} e^{-n_0} + \lambda_a\, A^{\mathrm{Arr}}_{j-1} X^*_{j-1}\, \frac{n_0^{j-1}}{(j-1)!} e^{-n_0} + \lambda_a\, b^{\mathrm{Arr}}_{j-1}\, \frac{n_0^{j-1}}{(j-1)!} e^{-n_0}.$$
Reminding that γ = λ_g/λ_d, dividing by λ_d on both sides and performing a few algebraic steps lead to the conclusion.

C. Bound on state variance
We characterize the evolution of the expected state variance E(Var(x(t))) = (−1, 1) E X(t). For this purpose, we lighten the notations by defining V_j(t) = E(Var(x(t)) | n(t) = j) and V(t) = E Var(x(t)). We will also drop the time dependence of V_j(t), V(t), and π_j(t) for the remainder of this section for concision. Moreover, we use the superscript ∗ to refer to the asymptotic value of those quantities, e.g., V* = lim_{t→∞} V(t).

Proposition 3.
The evolution of V_jπ_j satisfies the following:
$$\frac{d}{dt}(V_j\pi_j) \le \lambda_a\left(\frac{j-1}{j}\, V_{j-1} + \frac{\sigma^2}{j}\right)\pi_{j-1} + (j+1)\lambda_d\left(1 - \frac{1}{j^2}\right) V_{j+1}\pi_{j+1} - (\lambda_g + \lambda_a + j\lambda_d)\, V_j\pi_j. \qquad (37)$$

Proof.
The proof follows the application of Corollary 1 com-bined with Lemmas 2, 3 and 4 with n ( t ) = j , and for whichthere holds λ Gos j = jλ g , λ Arr j = λ a and λ Dep j = jλ d . Corollary 3.
Assume there exists V*_j = lim_{t→∞} V_j(t) such that d/dt(V*_j π*_j) = 0; then there holds
$$(n_0 + j + \gamma)\, V^*_j \le (j-1)\, V^*_{j-1} + \left(1 - \frac{1}{j^2}\right) n_0\, V^*_{j+1} + \sigma^2. \qquad (38)$$

Proof.
Lemma 7 gives π*_j = (n_0^j/j!)e^{-n_0}, where we remind that n_0 = λ_a/λ_d. Equation (37) then yields
$$(\lambda_a + \lambda_g + j\lambda_d)\, V^*_j\, \frac{n_0^j}{j!} e^{-n_0} \le \left(\frac{j-1}{j}\, V^*_{j-1} + \frac{\sigma^2}{j}\right)\lambda_a\, \frac{n_0^{j-1}}{(j-1)!} e^{-n_0} + \left(1 - \frac{1}{j^2}\right)(j+1)\lambda_d\, V^*_{j+1}\, \frac{n_0^{j+1}}{(j+1)!} e^{-n_0}.$$
Reminding that γ = λ_g/λ_d, dividing by λ_d on both sides and performing a few algebraic steps lead to the conclusion.

We can now derive results on the asymptotic behavior of the expected variance.

Proposition 4.
For any nonnegative sequence of numbers (z_j)_{j∈N} such that z_0 = 0, and satisfying for all j ≥ 1 (the z_{j−1} term being absent for j = 1 since z_0 = 0):
$$(n_0 + j + \gamma)\, z_j \ge j\, z_{j+1} + \left(1 - \frac{1}{(j-1)^2}\right) n_0\, z_{j-1} + \frac{n_0^j}{j!}\, e^{-n_0}, \qquad (39)$$
there holds
$$V^* \le \sum_{j=0}^{\infty} z_j\, \sigma^2. \qquad (40)$$

Proof.
Let Ṽ ≥ 0 be the vector containing the elements V*_j. There holds V* = πᵀṼ, where π is the vector such that [π]_j = π*_j = (n_0^j/j!)e^{-n_0}. From Corollary 3, there holds AṼ ≤ σ²𝟙, where 𝟙 is the all-ones vector, with [A]_{j,j} = (n_0 + j + γ), [A]_{j,j+1} = −(1 − 1/j²)n_0, and [A]_{j+1,j} = −j for all j ≥ 1.

Suppose now that there is a vector z ≥ 0 such that Aᵀz exists (i.e., [Aᵀz]_i = Σ_j [A]_{ji}[z]_j converges for all i) and satisfying π ≤ Aᵀz; then there holds
$$V^* = \pi^T \tilde{V} \le \big(A^T z\big)^T \tilde{V} = z^T A \tilde{V} \le z^T \mathbb{1}\, \sigma^2,$$
which concludes the proof.

Note that the above proof is inspired by duality, since z is a feasible dual solution to max_{x∈R_+} πᵀx subject to Ax ≤ σ²𝟙, which provides a valid upper bound on V*. In the following theorem, we provide an upper bound on the expected asymptotic variance based on the previous proposition.

Theorem 3.
In a system subject to random arrivals and departures where agents perform gossip updates, assuming that there exists V*_j = lim_{t→∞} V_j(t), there holds
$$V^* \le \big(1 - e^{-n_0}\big)\, \frac{\sigma^2}{1 + \gamma}. \qquad (41)$$

Proof.
One can verify that the sequence (z_j)_{j∈N} with z_0 = 0 and z_j = \frac{n_0^j}{j!(1+\gamma)} e^{-n_0} for j ≥ 1 satisfies (39), and it follows from Proposition 4 that
$$V^* \le \frac{\sigma^2}{1+\gamma}\, e^{-n_0} \sum_{j=1}^{\infty} \frac{n_0^j}{j!} = \frac{\sigma^2}{1+\gamma}\, e^{-n_0}\big(e^{n_0} - 1\big).$$
This concludes the proof.
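The series evaluation closing the proof can be checked numerically; a minimal sketch (the truncation level jmax is an implementation choice, and the Poisson weights are accumulated iteratively to avoid large factorials):

```python
import math

def z_sum(n0, gamma, jmax=60):
    # sum_{j>=1} n0^j / (j! (1+gamma)) * exp(-n0), computed iteratively.
    term = math.exp(-n0)          # n0^j / j! * exp(-n0) at j = 0
    total = 0.0
    for j in range(1, jmax):
        term *= n0 / j            # advance the Poisson weight to index j
        total += term
    return total / (1.0 + gamma)

def theorem3_bound(n0, gamma, sigma2=1.0):
    # Closed form (41): (1 - exp(-n0)) * sigma^2 / (1 + gamma).
    return (1.0 - math.exp(-n0)) * sigma2 / (1.0 + gamma)
```

For instance, with n_0 = 5 and γ = 10, the truncated sum matches the closed form (41) up to the (negligible) Poisson tail.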
Remark 2.
Interestingly, the bound (41) from Theorem 3 is similar to (23), the expected variance obtained for systems of fixed size as n gets large (i.e., σ²/(1+ρ) with ρ = λ_g/λ_r). Indeed, γ = λ_g/λ_d is the expected number of gossips happening in the system before a departure occurs, whereas ρ is that expected number before a replacement happens, which can be assimilated to a departure event for fixed-size systems.

The bound derived in Theorem 3 is not the only one that can be obtained from Proposition 4, since any feasible z leads to a bound. One can compute the best possible upper bound on the expected variance in the sense of Proposition 4 by solving
$$\min_{z \ge 0}\; \mathbb{1}^T z\, \sigma^2 \quad \text{subject to} \quad A^T z \ge \pi, \qquad (42)$$
(where the notations are those from the proof of Proposition 4). This bound corresponds, up to the duality gap, to max_{x∈R_+} πᵀx with x satisfying (38). This was solved approximately numerically, and the derived bound is depicted in Figure 6, together with that from Theorem 3, compared with results from an actual simulation.

It appears from the figure that the bounds are rather conservative for small values of γ; however, this corresponds to very chaotic systems where agents are expected to perform very few interactions before leaving. This conservatism is partly due to Lemma 3, for which only an upper bound is provided on the expected variance. In particular, it generates a strong conservatism for small values of n_0. The additional conservatism observed for the bound of Theorem 3 follows from the fact that the sequence used to derive it is not optimal in the sense of Proposition 4, which inherently adds conservatism by itself. Nevertheless, the bound from Theorem 3 becomes very close to the best bound (42) for moderate values of γ. Both bounds closely match the actual variance starting from γ ≈ 10, which corresponds to agents interacting on average 10 times before leaving.

Figure 6.
Asymptotic expected variance of a system of agents with zero-mean initial values and σ² = 1, subject to random arrivals (with rate λ_a = 1), random departures (with rate λ_d = 1), and random gossips (with varying rate λ_g), in terms of γ = λ_g/λ_d. The simulated variance is provided in dotted blue line, whereas the upper bounds from Theorem 3 and (42) are respectively in red plain line and dashed yellow line.

VI. CONCLUSIONS
In this paper, we considered the possibility for agents to get in and out of a system, which is then called open. We focused on the behavior of a classical multi-agent algorithm in that context: all-to-all pairwise gossiping. Whereas openness raises several challenges, including variations of the dimension of the system and the absence of usual convergence, we have shown that these systems can be characterized in terms of scale-independent quantities whose evolution is governed by fixed-size linear systems. We obtained the evolution of the two first moments and of the variance for both fixed- and variable-size open systems through the use of continuous-time tools to model the asynchronous events. We observed that the openness of a system may result in a significant performance drop in terms of variance reduction as compared to closed systems.

This analysis through scale-independent quantities can be extended to more general problems relying on more complex definitions of arrivals and departures, and on different types of interactions. More generally, we believe it can be a useful tool to study the behavior of open multi-agent systems in general, and a natural continuation of this work could be the analysis of interactions restricted to a graph. Another challenge left untackled consists in establishing the probability distribution of our descriptors instead of only their expected value.

REFERENCES

[1] C. Delporte-Gallet, H. Fauconnier, R. Guerraoui, and E. Ruppert, “When birds die: Making population protocols fault-tolerant,” in
International Conference on Distributed Computing in Sensor Systems. Springer, 2006, pp. 51–66. [2] J. Predd, S. Kulkarni, and H. V. Poor, “Distributed learning in wireless sensor networks,”
Signal Processing Magazine, IEEE, vol. 23, pp. 56–69, 07 2006. [3] F. Molinari and J. Raisch, “Efficient consensus-based formation control with discrete-time broadcast updates,” in , Dec 2019, pp. 4172–4177. [4] M. Abdelrahim, J. M. Hendrickx, and W. M. Heemels, “Max-consensus in open multi-agent systems with gossip interactions,” in
Proceedings of the 56th IEEE CDC, 12 2017, pp. 4753–4758. [5] Z. A. Z. Sanai Dashti, C. Seatzu, and M. Franceschelli, “Dynamic consensus on the median value in open multi-agent systems,” in , Dec 2019, pp. 3691–3697. [6] C. Monnoyer de Galland and J. M. Hendrickx, “Lower bound performances for average consensus in open multi-agent systems,” in , Dec 2019, pp. 7429–7434. [7] ——, “Fundamental performance limitations for average consensus in open multi-agent systems,” arXiv e-prints, 2020. [8] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,”
IEEE/ACM Transactions on Networking (TON), vol. 14, no. SI, pp. 2508–2530, 2006. [9] J. M. Hendrickx and S. Martin, “Open multi-agent systems: Gossiping with deterministic arrivals and departures,” , 2016. [10] ——, “Open multi-agent systems: Gossiping with random arrivals and departures,” , 2017. [11] C. Carrascosa, A. Giret, V. Julian, M. Rebollo, E. Argente, and V. Botti, “Service oriented mas: an open architecture,” in
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2. International Foundation for Autonomous Agents and Multiagent Systems, 2009, pp. 1291–1292. [12] A. Giret, V. Julian, M. Rebollo, E. Argente, C. Carrascosa, and V. Botti, “An open architecture for service-oriented virtual organizations,” in
Lecture Notes in Computer Science, vol. 5919, 09 2010, pp. 118–132. [13] O. K. Tonguz, “Red light, green light - no light: Tomorrow's communicative cars could take turns at intersections,”
IEEE Spectrum, vol. 55, no. 10, pp. 24–29, Oct 2018. [14] T. D. Huynh, N. R. Jennings, and N. R. Shadbolt, “An integrated trust and reputation model for open multi-agent systems,”
Autonomous Agents and Multi-Agent Systems, vol. 13, no. 2, pp. 119–154, 2006. [15] D. Angluin, J. Aspnes, M. J. Fischer, and H. Jiang, “Self-stabilizing population protocols,”
ACM Transactions on Autonomous and Adaptive Systems (TAAS), vol. 3, no. 4, p. 13, 2008. [16] J. Török, G. Iñiguez, T. Yasseri, M. San Miguel, K. Kaski, and J. Kertész, “Opinions, conflicts, and consensus: modeling social dynamics in a collaborative environment,”
Physical Review Letters, vol. 110, no. 8, p. 088701, 2013. [17] G. Iñiguez, J. Török, T. Yasseri, K. Kaski, and J. Kertész, “Modeling social dynamics in a collaborative environment,”
EPJ Data Science, vol. 3, no. 1, p. 1, 2014. [18] P. Sen and B. K. Chakrabarti,
Sociophysics: an introduction. Oxford University Press, 2013. [19] M. Franceschelli and P. Frasca, “Proportional Dynamic Consensus in Open Multi-Agent Systems,” in , Miami, FL, United States, Dec. 2018, pp. 900–905. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01945840 [20] E. Hazan, “Introduction to online convex optimization,”
Foundations and Trends in Optimization, vol. 2, pp. 157–325, 01 2016. [21] J. M. Hendrickx and M. G. Rabbat, “Stability of decentralized gradient descent in open multi-agent systems,” (To appear in Proceedings of CDC 2020). [22] F. Fagnani and S. Zampieri, “Randomized consensus algorithms over large scale networks,”
IEEE Journal on Selected Areas in Communications, vol. 26, no. 4, pp. 634–649, May 2008.

APPENDIX
A. Proof of Lemma 1 (Upper bound on the descriptors)
Let us fix some time t ≥ 0 and a deterministic sequence of events (constituted of arrivals, departures and gossip steps) starting at time 0 and such that N(t) = S.

Let T be the set of the labels of all the agents that have been in the system at some time between times 0 and t (for simplicity those are assumed to be all different). Moreover, define ξ(s) for 0 ≤ s ≤ t as the |T|-dimensional vector containing the values x_i(s) of all agents i ∈ T at time s: in that virtual system, agents that already left the actual system at time s are assumed to keep the value they held at their departure, and agents that did not join the actual system yet are assumed to hold the value with which they will be initialized at their arrival. Notice that in the virtual system, the only effect of arrivals and departures is to respectively induce or stop the modifications of the corresponding values in ξ(s). Gossip iterations result in the multiplication of ξ by a doubly stochastic matrix, and thus there holds ξ(t) = Aξ(0) for some fixed doubly stochastic matrix A. Let us compute
$$\sum_{i\in S} x_i(t) = \sum_{i\in S} \xi_i(t) = \sum_{\substack{i\in S\\ j\in T}} A_{ij}\,\xi_j(0) = \sum_{j\in T} w_j\, \xi_j(0),$$
with w_j = Σ_{i∈S} A_{ij} ≥ 0. Since the initial values ξ_j(0) are all i.i.d.
with E ξ_j(0) = 0, there holds
$$\mathbb{E}\left(\sum_{i\in S} x_i(t)\right)^2 = \mathbb{E}\left(\sum_{j\in T} w_j\,\xi_j(0)\right)^2 = \sum_{j\in T} w_j^2\,\mathbb{E}\big(\xi_j(0)^2\big) + \sum_{\substack{j,k\in T\\ j\ne k}} w_j w_k\,\mathbb{E}\big(\xi_j(0)\xi_k(0)\big) = \sigma^2 \sum_{j\in T} w_j^2,$$
where the absence of correlation between the initial values ξ_j(0) is used to nullify the crossed products in the last equality. Since A is doubly stochastic, there holds for all j ∈ T that w_j = Σ_{i∈S} A_{ij} ≤ Σ_{i∈T} A_{ij} = 1, and thus w_j² ≤ w_j. Moreover, Σ_{j∈T} w_j = Σ_{i∈S} Σ_{j∈T} A_{ij} = |S|, and it follows
$$\mathbb{E}\left(\sum_{i\in S} x_i(t)\right)^2 = \sigma^2 \sum_{j\in T} w_j^2 \le \sigma^2 \sum_{j\in T} w_j = |S|\,\sigma^2,$$
which proves the result for a deterministic sequence of events. Since the events (arrivals, departures and gossip steps) are independent of the values held by the agents, one can extend the result above to stochastic event sequences by considering its expected value over all possible event sequences.

Finally, (4) follows from
$$\mathbb{E}\big(\bar{x}^2 \,\big|\, n(t) = j\big) = \frac{1}{j^2}\,\mathbb{E}\left(\sum_{i\in S} x_i(t)\right)^2 \le \frac{\sigma^2}{j}; \qquad \mathbb{E}\big(\overline{x^2} \,\big|\, n(t) = j\big) = \frac{1}{j}\sum_{i\in S}\mathbb{E}\big(x_i(t)^2\big) \le \sigma^2,$$
where the second inequality follows by applying the result above to each singleton {i}.

B. Proofs of Lemmas 2, 3, 4 and 5 (Effect of events)

1) Proof of Lemma 2 (Gossip step):
Let us first fix the nodes i, j involved in the gossip step. Firstly, observe that x'_i + x'_j = 2(x_i + x_j)/2 = x_i + x_j, and that x'_k = x_k for all k ≠ i, j. Hence $\bar{x}' = \bar{x}$, which establishes the first line of (5). Secondly, since x_k = x'_k for every k ≠ i, j, there holds
$$\overline{x'^2} = \frac{1}{n}\sum_{k=1}^n x_k'^2 = \overline{x^2} + \frac{1}{n}\left(2\left(\frac{x_i+x_j}{2}\right)^2 - x_i^2 - x_j^2\right) = \overline{x^2} + \frac{1}{n}\left(x_i x_j - \frac{x_i^2}{2} - \frac{x_j^2}{2}\right). \qquad (43)$$
Observe that $\mathbb{E}(x_i^2\,|\,x) = \mathbb{E}(x_j^2\,|\,x) = \overline{x^2}$ and $\mathbb{E}(x_i x_j\,|\,x) = \bar{x}^2$. Taking the expectation with respect to i and j in (43) yields
$$\mathbb{E}\big(\overline{x'^2}\,\big|\,x\big) = \left(1 - \frac{1}{n}\right)\overline{x^2} + \frac{1}{n}\,\bar{x}^2,$$
from which the second line of (5) follows. Finally, (7) follows from the direct computation of E(Var(x') | Var(x), Gos_n) = (−1, 1) E(X' | X, Gos_n), which concludes the proof.
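The conditional expectations above can be reproduced exactly by averaging the pairwise-averaging update over all ordered pairs (i, j) drawn uniformly with replacement (i = j giving a no-op); a small self-contained sketch, not the paper's code:

```python
def gossip_expected_descriptors(x):
    # Exact average of (squared mean, mean of squares) after one gossip step,
    # over an ordered pair (i, j) drawn uniformly with replacement.
    n = len(x)
    sq_mean, mean_sq = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            y = list(x)
            y[i] = y[j] = (x[i] + x[j]) / 2.0   # pairwise averaging step
            sq_mean += (sum(y) / n) ** 2
            mean_sq += sum(v * v for v in y) / n
    return sq_mean / n**2, mean_sq / n**2
```

On any state, the squared mean is unchanged, the mean of squares equals (1 − 1/n) times its old value plus (1/n) times the squared mean, and the variance is therefore contracted by (1 − 1/n), as in (5) and (7).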
2) Proof of Lemma 3 (Arrival):
We label n+1 the arriving agent for simplicity, so that x'_k = x_k for all k ≤ n. We begin by computing the new average:
$$\bar{x}' = \frac{1}{n+1}\left(x'_{n+1} + \sum_{k=1}^n x_k\right) = \frac{n}{n+1}\,\bar{x} + \frac{1}{n+1}\,x'_{n+1}. \qquad (44)$$
Since E x'_{n+1} = 0, we have $\mathbb{E}(\bar{x}'\,|\,x) = \frac{n}{n+1}\bar{x}$. By exactly the same reasoning, but using $\mathbb{E}\,x'^2_{n+1} = \sigma^2$, we also obtain
$$\mathbb{E}\big(\overline{x'^2}\,\big|\,x\big) = \frac{n}{n+1}\,\overline{x^2} + \frac{1}{n+1}\,\sigma^2, \qquad (45)$$
from which the second line of (8) follows. Turning to the first line, we obtain from (44)
$$\mathbb{E}\big(\bar{x}'^2\,\big|\,x\big) = \frac{1}{(n+1)^2}\big(n^2\bar{x}^2 + 2n\bar{x}\,\mathbb{E}\,x'_{n+1} + \mathbb{E}\,x'^2_{n+1}\big) = \frac{n^2}{(n+1)^2}\,\bar{x}^2 + 0 + \frac{1}{(n+1)^2}\,\sigma^2.$$
Finally, (10) follows from the direct computation of E(Var(x') | Var(x), Arr_n) = (−1, 1) E(X' | X, Arr_n), where we use Lemma 1 to obtain the inequality.
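These relations can be verified exactly with a two-point newcomer distribution (±s with equal probability, hence zero mean and variance s²); a minimal sketch:

```python
def arrival_expected_descriptors(x, s=1.0):
    # Exact average of (squared mean, mean of squares) after an arrival whose
    # value is +s or -s with equal probability (zero mean, variance s^2).
    n = len(x)
    sq_mean, mean_sq = 0.0, 0.0
    for xnew in (+s, -s):
        y = list(x) + [xnew]
        sq_mean += (sum(y) / (n + 1)) ** 2 / 2.0
        mean_sq += sum(v * v for v in y) / (n + 1) / 2.0
    return sq_mean, mean_sq
```

For the state [1, 2, 3] with s = 1, the averages reproduce (n²·\bar{x}² + σ²)/(n+1)² and (n·\overline{x²} + σ²)/(n+1) exactly.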
3) Proof of Lemma 4 (Departure):
Let j be the randomly selected agent that leaves the system. It follows that
$$\bar{x}' = \frac{1}{n-1}\left(\left(\sum_{k=1}^n x_k\right) - x_j\right) = \frac{1}{n-1}\big(n\bar{x} - x_j\big). \qquad (46)$$
By exactly the same reasoning, there holds $\overline{x'^2} = \frac{1}{n-1}\big(n\,\overline{x^2} - x_j^2\big)$. Since j is randomly selected, $\mathbb{E}(x_j^2\,|\,x) = \overline{x^2}$. Hence,
$$\mathbb{E}\big(\overline{x'^2}\,\big|\,x\big) = \frac{1}{n-1}\big(n\,\overline{x^2} - \overline{x^2}\big) = \overline{x^2},$$
which implies the second line of (11). For the first line, taking into account $\mathbb{E}(x_j\,|\,x) = \bar{x}$, it follows from (46) that
$$\mathbb{E}\big(\bar{x}'^2\,\big|\,x\big) = \frac{1}{(n-1)^2}\big(n^2\bar{x}^2 - 2n\bar{x}\,\mathbb{E}(x_j|x) + \mathbb{E}(x_j^2|x)\big) = \frac{n(n-2)}{(n-1)^2}\,\bar{x}^2 + \frac{1}{(n-1)^2}\,\overline{x^2}.$$
Finally, (13) follows from the direct computation of E(Var(x') | Var(x), Dep_n) = (−1, 1) E(X' | X, Dep_n), which concludes the proof.
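Averaging over the n equally likely leaving agents reproduces both relations exactly; a minimal sketch:

```python
def departure_expected_descriptors(x):
    # Exact average of (squared mean, mean of squares) over the n equally
    # likely departures; the mean of squares is preserved in expectation.
    n = len(x)
    sq_mean, mean_sq = 0.0, 0.0
    for j in range(n):
        y = x[:j] + x[j + 1:]
        sq_mean += (sum(y) / (n - 1)) ** 2 / n
        mean_sq += sum(v * v for v in y) / (n - 1) / n
    return sq_mean, mean_sq
```

For the state [1, 2, 3], the expected mean of squares stays at 14/3, and the expected squared mean equals (n(n−2)\bar{x}² + \overline{x²})/(n−1)² = 25/6.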
4) Proof of Lemma 5 (Replacement):
The matrix equality (14) follows from a combination of Lemmas 4 and 3, the latter applied to a system of size n−1 joined by an n-th agent. The inequality (16) follows from
$$\mathbb{E}\big(\mathrm{Var}(x')\,\big|\,x, \mathrm{Rep}_n\big) = -\frac{n-2}{n}\,\bar{x}^2 - \frac{1}{n^2}\,\overline{x^2} - \frac{\sigma^2}{n^2} + \frac{n-1}{n}\,\overline{x^2} + \frac{\sigma^2}{n}$$
$$= -\frac{n^2-2n}{n^2}\,\bar{x}^2 + \frac{n^2-n-1}{n^2}\,\overline{x^2} + \frac{n-1}{n^2}\,\sigma^2$$
$$= \frac{n-1}{n^2}\,\bar{x}^2 + \frac{n^2-n-1}{n^2}\,\mathrm{Var}(x) + \frac{n-1}{n^2}\,\sigma^2$$
$$\le \frac{n^2-n-1}{n^2}\,\mathrm{Var}(x) + \frac{n^2-1}{n^3}\,\sigma^2,$$
where we used Lemma 1 (in expectation, $\mathbb{E}\,\bar{x}^2 \le \sigma^2/n$) for the last inequality.

C. Proof of Proposition 1
Let us define the indicator random variable
$$\chi_i(t) = \begin{cases} 1, & \text{if } n(t) = i;\\ 0, & \text{otherwise.} \end{cases}$$
Then there holds
$$\mathbb{E}(\chi_i(t)X(t)) = \mathbb{E}(\chi_i(t)X(t)\,|\,n(t) = i)\,\pi_i(t) + \mathbb{E}(\chi_i(t)X(t)\,|\,n(t) \ne i)\,(1 - \pi_i(t)).$$
By definition, E(χ_i(t)X(t) | n(t) ≠ i) = 0 and E(χ_i(t)X(t) | n(t) = i) = E(X(t) | n(t) = i), and it follows for all i ∈ N that
$$\mathbb{E}(\chi_i(t)X(t)) = X_i(t)\,\pi_i(t).$$
Let us fix some j ∈ N and some time t, and define some small δt > 0. Moreover, define E_0, E_1, E_{≥2} the events that respectively zero, exactly one, and at least two events happened between times t and t+δt such that n(t+δt) = j. Define also E* the event that any other sequence of events happened; then
$$X_j(t+\delta t)\,\pi_j(t+\delta t) = \mathbb{E}(\chi_j(t+\delta t)X(t+\delta t)) = \sum_{\mathcal{E}\in\{E_0, E_1, E_{\ge 2}, E^*\}} \mathbb{E}(\chi_j(t+\delta t)X(t+\delta t)\,|\,\mathcal{E})\,P(\mathcal{E}).$$
By definition, E(χ_j(t+δt)X(t+δt) | E*) = 0. Moreover, from Assumption 1, P(E_{≥2}) = o(δt). Furthermore, conditional on E_0, there holds χ_j(t+δt) = χ_j(t) and X(t+δt) = X(t). Hence, E(χ_j(t+δt)X(t+δt) | E_0) = E(X(t) | n(t) = j) = X_j(t), and from Poisson properties
$$P(E_0) = \pi_j(t) - \sum_{\epsilon\,:\,s(\epsilon) = j} \lambda_\epsilon\,\delta t + o(\delta t).$$
Finally, there holds
$$\mathbb{E}(\chi_j(t+\delta t)X(t+\delta t)\,|\,E_1)\,P(E_1) = \sum_{\epsilon\,:\,a(\epsilon) = j} \mathbb{E}(X(t+\delta t)\,|\,\epsilon)\,P(\epsilon).$$
Using (17), E(X(t+δt)|ε) = A_ε E(X(t) | n(t) = s(ε)) + b_ε. Moreover, P(ε) = π_{s(ε)}(t)λ_ε δt + o(δt).
Hence, combining everything together gives
$$X_j(t+\delta t)\,\pi_j(t+\delta t) = \left(\pi_j(t) - \sum_{\epsilon\,:\,s(\epsilon)=j}\lambda_\epsilon\,\delta t\right) X_j(t) + \sum_{\epsilon\,:\,a(\epsilon)=j}\lambda_\epsilon\big(A_\epsilon X_{s(\epsilon)} + b_\epsilon\big)\,\pi_{s(\epsilon)}(t)\,\delta t + o(\delta t).$$
Using the fact that E(X(t)) is bounded for any event sequence, and taking the limit δt → 0 ultimately leads to the conclusion.

D. Proof of Lemma 6 (Adversarial events)
One can show that, given n values x_j,
$$\bar{x} = \arg\min_y \sum_{j=1}^n (x_j - y)^2.$$
Hence, an alternative formulation for the variance is given by
$$\mathrm{Var}(x) = \frac{1}{n}\sum_{j=1}^n (x_j - \bar{x})^2 = \frac{1}{n}\min_y \sum_{j=1}^n (x_j - y)^2.$$
For simplicity, assume that the agents presently in the system are labelled from 1 to n. Assume moreover that at a departure, the agent that leaves is labelled n. Denoting x̃ the state of the system after the departure, there holds
$$\mathrm{Var}(\tilde{x}) = \frac{1}{n-1}\min_y \sum_{j=1}^{n-1}(x_j - y)^2 \le \frac{1}{n-1}\sum_{j=1}^{n-1}(x_j - \bar{x})^2 \le \frac{1}{n-1}\sum_{j=1}^{n}(x_j - \bar{x})^2 = \frac{n}{n-1}\,\mathrm{Var}(x),$$
where the nonnegativity of (x_n − \bar{x})² is used for the last inequality. Taking the expected value of the above result leads to (29), the first result of Lemma 6.

Let us now denote by x' the state of the system after an arrival that instantaneously follows the departure. Then, applying Lemma 3 to the system at state x̃, it follows that
$$\mathbb{E}\,\mathrm{Var}(x') = \mathbb{E}\,\overline{x'^2} - \mathbb{E}\,\bar{x}'^2 = \frac{n-1}{n}\,\mathbb{E}\,\overline{\tilde{x}^2} + \frac{\sigma^2}{n} - \frac{(n-1)^2}{n^2}\,\mathbb{E}\,\bar{\tilde{x}}^2 - \frac{\sigma^2}{n^2} = \frac{n-1}{n}\,\mathbb{E}\,\mathrm{Var}(\tilde{x}) + \frac{n-1}{n^2}\,\mathbb{E}\,\bar{\tilde{x}}^2 + \frac{n-1}{n^2}\,\sigma^2.$$
Applying the previous result to bound E Var(x̃), and then Lemma 1 to bound $\mathbb{E}\,\bar{\tilde{x}}^2 \le \sigma^2/(n-1)$, it follows that
$$\mathbb{E}\,\mathrm{Var}(x') \le \mathbb{E}\,\mathrm{Var}(x) + \frac{1}{n^2}\,\sigma^2 + \frac{n-1}{n^2}\,\sigma^2 = \mathbb{E}\,\mathrm{Var}(x) + \frac{\sigma^2}{n},$$
which concludes the proof of the second result (30) of Lemma 6.

Charles Monnoyer de Galland is a PhD student at UCLouvain, in the ICTEAM Institute, under the supervision of Professor Julien M. Hendrickx since 2018. He is a FRIA fellow (F.R.S.-FNRS). He obtained an engineering degree in applied mathematics (2018), and started a PhD in mathematical engineering the same year at the same university. His research interests are centered around the analysis of open multi-agent systems.
Samuel Martin received the Dip. Ing. from the École Nationale Supérieure d'Informatique et de Mathématiques Appliquées de Grenoble, France, in 2009 and the Ph.D. degree in applied mathematics from the Université de Grenoble, France, in 2012. From January 2013 to August 2013, he was a Postdoctoral Researcher at the Large Graph and Networks Group, Université Catholique de Louvain, Belgium. Since September 2013, he has been an Assistant Professor at CRAN, Université de Lorraine, France. His research interests include multi-agent systems, consensus theory and applications to opinion dynamics and social networks.