Randomized Constraints Consensus for Distributed Robust Linear Programming ⋆

Mohammadreza Chamanbaz ∗,∗∗∗, Giuseppe Notarstefano ∗∗, Roland Bouffanais ∗∗∗

∗ Arak University of Technology, Arak, Iran ([email protected])
∗∗ Department of Engineering, Università del Salento, Lecce, Italy ([email protected])
∗∗∗ Singapore University of Technology and Design, Singapore ({Chamanbaz,bouffanais}@sutd.edu.sg)

Abstract:
In this paper we consider a network of processors aiming at cooperatively solving linear programming problems subject to uncertainty. Each node only knows a common cost function and its local uncertain constraint set. We propose a randomized, distributed algorithm working under a time-varying, asynchronous and directed communication topology. The algorithm is based on a local computation and communication paradigm. At each communication round, nodes perform two updates: (i) a verification step in which they check—in a randomized setup—the robust feasibility (and hence optimality) of the candidate optimal point, and (ii) an optimization step in which they exchange their candidate bases (minimal sets of active constraints) with neighbors and locally solve an optimization problem whose constraint set includes: a sampled constraint violating the candidate optimal point (if it exists), the agent's current basis and the collection of the neighbors' bases. As main result, we show that if a processor successfully performs the verification step for a sufficient number of communication rounds, it can stop the algorithm since a consensus has been reached. The common solution is—with high confidence—feasible (and hence optimal) for the entire set of uncertainty except a subset having arbitrarily small probability measure. We show the effectiveness of the proposed distributed algorithm on a multi-core platform in which the nodes communicate asynchronously.
Keywords:
Distributed Optimization, Randomized Algorithms, Robust Linear Programming, Optimization and control of large-scale network systems, Large scale optimization problems.

⋆ This work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 638992 - OPT4SMART (GN), and by a grant from the Singapore National Research Foundation (NRF) under the ASPIRE project, grant No NCR-NCR001-040 (MC&RB).

1. INTRODUCTION

Robust optimization plays an important role in several areas such as estimation and control, and has been widely investigated. Its rich literature dates back to the 1950s; see Ben-Tal and Nemirovski (2009) and references therein. Very recently, there has been renewed interest in this topic in a parallel and/or distributed framework. In Lee and Nedić (2013), a synchronous distributed random projection algorithm with almost-sure convergence is proposed for the case where each node has an independent cost function and (uncertain) constraint. Since the distributed algorithm relies on extracting random samples from an uncertain constraint set, several assumptions on the random set, the network structure and the agent weights are made to prove almost-sure convergence. The synchronization of the update rule relies on a central clock to coordinate the step-size selection. To circumvent this limitation, the same authors in Lee and Nedić (2016) present an asynchronous random projection algorithm in which a gossip-based protocol is used to desynchronize the step-size selection. The algorithms proposed in Lee and Nedić (2013, 2016) require computing a projection onto the constraint set at each iteration, which is computationally demanding if the constraint set does not have a simple structure such as a half-space or a polyhedron. In Carlone et al. (2014), a parallel/distributed scheme is considered for solving an uncertain optimization problem by means of the scenario approach (Calafiore and Campi, 2004). The scheme consists of extracting a number of samples from the uncertain set and assigning a portion of them to each node in a network. Then, a variant of the constraints consensus algorithm introduced in Notarstefano and Bullo (2011) is used to solve the deterministic optimization problem. A similar parallel framework for solving convex optimization problems with one uncertain constraint via the scenario approach is considered in You and Tempo (2016). In this setup, the sampled optimization problem is solved in a distributed way by using a primal-dual subgradient (resp. random projection) algorithm over an undirected (resp. directed) graph. We remark that in Carlone et al. (2014) and You and Tempo (2016), the constraints and cost function of all agents are identical. In Bürger et al. (2014), a cutting-plane consensus algorithm is introduced for solving convex optimization problems where the constraints are distributed to the network processors and all processors have a common cost function. In the case where the constraints are uncertain, a worst-case approach based on a pessimizing oracle is used.
The oracle relies on the assumption that the constraints are concave with respect to the uncertainty vector and that the uncertainty set is convex. A distributed scheme based on the scenario approach is introduced in Margellos et al. (2016), in which random samples are extracted by each node from its local uncertain constraint set and a distributed proximal minimization algorithm is designed to solve the sampled optimization problem. The number of samples required to guarantee robustness can be large if the probabilistic levels defining the robustness of the solution—the accuracy and confidence levels—are stringent, possibly leading to a computationally demanding sampled optimization problem at each node.

The main contribution of this paper is the design of a fully distributed algorithm to solve an uncertain linear program in a network with directed and asynchronous communication. The problem under investigation is a linear program in which the constraint set is the intersection of local uncertain constraints, each one known only by a single node. Starting from a deterministic constraint exchange idea introduced in Notarstefano and Bullo (2011), the algorithm proposed in this paper introduces a randomized, sequential approach in which each node: (i) locally performs a probabilistic verification step (based on a local sampling of its uncertain constraint set), and (ii) solves a local, deterministic optimization problem with a limited number of constraints. If suitable termination conditions are satisfied, we are able to prove that the nodes agree on a common solution which is probabilistically feasible and optimal with high confidence. As compared to the literature above, the proposed algorithm has three main advantages. First, no assumptions are needed on the probabilistic nature of the local constraint sets. Second, each node can sample its own uncertain set locally. Thus, no central unit is needed to extract samples and no common constraint set needs to be known by the nodes. Third and finally, nodes do not need to perform the whole sampling at the beginning and subsequently solve the (deterministic) optimization problem. Online extracted samples are used only for verification, which is computationally inexpensive. The optimization is always performed on a number of constraints that remains constant at each node and depends only on the dimension of the decision variable and on the number of the node's neighbors.

The paper is organized as follows. In Section 2, we formulate the uncertain distributed linear program (LP). Section 3 presents our distributed sequential randomized algorithm for finding a solution—with probabilistic robustness—to the uncertain distributed LP. The probabilistic convergence properties of the distributed algorithm are investigated in Section 4. Finally, extensive numerical simulations are performed in Section 5 to show the effectiveness of the proposed methodology.

2. PROBLEM FORMULATION

We consider a network of processors with limited computation and/or communication capabilities that aim at cooperatively solving the following uncertain linear program:

  min_θ  c^T θ
  subject to  A_i(q)^T θ ≤ b_i(q),  ∀ q ∈ Q,  i ∈ {1, ..., n},   (1)

where θ ∈ Θ ⊂ R^d is the vector of decision variables, q ∈ Q is the uncertainty vector, c ∈ R^d defines the objective direction, and A_i(q) ∈ R^{d×m_i} and b_i(q) ∈ R^{m_i}, with m_i ≥ d, define the (uncertain) constraint set of agent i ∈ {1, ..., n}. Processor i only knows the constraint set defined by A_i(q) and b_i(q), and the objective direction c (which is the same for all nodes). Each node runs a local algorithm and, by exchanging limited information with its neighbors, all nodes converge to the same solution. We want to stress that there is no (central) node having access to all constraints. We make the following assumption regarding problem (1).

Assumption 1. (Non-degeneracy).
The minimum point of any subproblem of (1) with at least d constraints is unique, and there exist only d constraints intersecting at the minimum point.

We let the nodes communicate according to a time-dependent, directed communication graph G(t) = {V, E(t)}, where t ∈ N is a universal time, V = {1, ..., n} is the set of agent identifiers, and (i, j) ∈ E(t) indicates that i sends information to j at time t. The time-varying set of incoming (resp. outgoing) neighbors of node i at time t, N_in(i, t) (resp. N_out(i, t)), is defined as the set of nodes from (resp. to) which agent i receives (resp. transmits) information at time t. A directed static graph is said to be strongly connected if there exists a directed path (of consecutive edges) between any pair of nodes in the graph. For time-varying graphs we use the notion of uniform joint strong connectivity, formally defined next.

Assumption 2. (Uniform joint strong connectivity). There exists an integer L ≥ 1 such that the graph (V, ∪_{τ=t}^{t+L−1} E(τ)) is strongly connected for all t ≥ 0.

The uncertain parameter q enters problem (1), making it computationally difficult to solve. In fact, if the uncertainty set Q is an uncountable set, problem (1) is a semi-infinite optimization problem involving an infinite number of constraints. In general, there are two main paradigms to solve an uncertain optimization problem of the form (1). The first approach is a deterministic worst-case paradigm, in which the constraints are enforced to hold for all possible uncertain parameters in the set Q. This approach is computationally intractable for cases where the uncertainty does not appear in a "simple" form, e.g. affine, multi-affine, convex, etc. The second approach is a probabilistic approach, where the uncertain parameters are considered to be random variables and the constraints are enforced to hold for the entire set of uncertainty except a subset having an arbitrarily small probability measure.
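To make the probabilistic paradigm concrete, the following pure-Python sketch estimates, by Monte Carlo, the violation probability of a fixed candidate point θ for a single uncertain half-space constraint. The box-shaped uncertainty set, the uniform measure over it and all numerical values are illustrative assumptions, not part of the problem data above.

```python
import random

def estimate_violation(theta, a_nom, b, rho, n_samples=20000, seed=0):
    """Monte Carlo estimate of P{ (a_nom + q)^T theta > b } with
    q uniform in the box [-rho, rho]^d (a toy uncertainty model)."""
    rng = random.Random(seed)
    d = len(theta)
    violations = 0
    for _ in range(n_samples):
        q = [rng.uniform(-rho, rho) for _ in range(d)]
        lhs = sum((a_nom[j] + q[j]) * theta[j] for j in range(d))
        if lhs > b:
            violations += 1
    return violations / n_samples

# Example: theta is feasible for the nominal constraint but not robustly so,
# since the uncertain left-hand side can exceed b for some q in the box.
theta = [1.0, 1.0]
p_hat = estimate_violation(theta, a_nom=[0.5, 0.5], b=1.1, rho=0.2)
# p_hat is the empirical probability that the uncertain constraint is violated.
```

In the worst-case paradigm this θ would simply be rejected; in the probabilistic paradigm it is acceptable whenever the violation probability is below a prescribed level ε.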
In this paper, we follow the probabilistic approach and present a distributed, tractable, randomized setup for finding a solution—with desired probabilistic properties—to the optimization problem (1).

Notation

The constraint set of agent i is defined by H_i(q) ≐ [A_i(q), b_i(q)]. Throughout this paper, we use a capital italic letter, e.g. H_i(q) ≐ [A_i(q), b_i(q)], to denote a collection of half-spaces, and a capital calligraphic letter, ℋ_i(q), to denote the set induced by the half-spaces, i.e. ℋ_i(q) ≐ {θ ∈ R^d : A_i(q)^T θ ≤ b_i(q)}. We note that, with this notation, if A = B ∪ C with B and C being collections of half-spaces, then 𝒜 = ℬ ∩ 𝒞; that is, the set induced by the union of the constraint sets B and C is the intersection of ℬ and 𝒞. Finally, J(H) is the smallest value of c^T θ subject to θ ∈ ℋ. The linear program specific to each agent i ∈ V is fully characterized by the pair (H_i(q), c) (note that c defines the objective direction, which is the same for all nodes).

3. RANDOMIZED CONSTRAINTS CONSENSUS

In this section, we present a distributed, randomized algorithm for solving the uncertain linear program (1) in a probabilistic sense. First, recall that the solution of a linear program of the form (1) can be identified by at most d active constraints (d being the dimension of the decision variable). This concept is formally characterized by the notion of basis. Given a collection of constraints H, a subset B ⊆ H is a basis of H if the optimal cost of the LP problem defined by (H, c) is identical to the one defined by (B, c), and the optimal cost decreases if any constraint is removed from B. We define a primitive [θ*, B] = Solve_LP(H, c) which solves the LP problem defined by the pair (H, c) and returns the optimal point θ* and the corresponding basis B.

Note that, since the uncertainty set is uncountable, it is in general very difficult to verify whether a candidate solution is feasible for the entire set of uncertainty. We instead use a randomized approach, based on Monte Carlo simulation, to check probabilistic feasibility. The distributed algorithm we propose has a probabilistic nature consisting of two main steps: verification and optimization. The main idea is the following. Each node has a candidate basis and a candidate solution point. First, it verifies whether the candidate solution point belongs to its local uncertain set with high probability. Then, it collects bases from its neighbors and solves an LP with its own basis and its neighbors' bases as the constraint set. If the verification step was not successful, the first violating constraint is also added to the problem.

Formally, we assume that q is a random variable and that a probability measure P over the Borel σ-algebra of Q is given. In the verification step, each agent i generates M_{k_i} independent and identically distributed (i.i.d.) random samples q_{k_i} ≐ {q^(1), ..., q^(M_{k_i})} ∈ Q^{M_{k_i}} from the uncertainty set, according to the measure P, where k_i is a local counter keeping track of the number of times the verification step is performed and Q^{M_{k_i}} ≐ Q × ... × Q (M_{k_i} times). Using a Monte Carlo algorithm, node i checks the feasibility of the candidate solution θ_i(t) only at the extracted samples. If a violation happens, the first violating sample is used as a violation certificate. In the optimization step, agent i transmits its current basis to all outgoing neighbors and receives bases from its incoming ones. Then, it solves an LP problem whose constraint set is composed of: (i) a constraint constructed from the violation certificate (if it exists), (ii) its current basis, and (iii) the collection of bases from all incoming neighbors.
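The Solve_LP primitive can be sketched in pure Python for very small problems by exploiting the fact recalled above: the optimum of an LP in R^d is identified by d active constraints. The brute-force enumeration of d-subsets below is meant only to illustrate the notion of basis (a production implementation would call an LP solver); the tolerance, the data layout as (row, bound) pairs, and the assumption of a unique bounded optimum are all ours.

```python
from itertools import combinations

TOL = 1e-9

def _solve_linear(A, b):
    """Solve the d x d system A x = b by Gauss-Jordan elimination with
    partial pivoting; return None if the system is (near-)singular."""
    d = len(b)
    M = [list(row) + [b[i]] for i, row in enumerate(A)]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < TOL:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(d):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][d] / M[i][i] for i in range(d)]

def solve_lp(H, c):
    """Solve min c^T theta s.t. a^T theta <= beta for (a, beta) in H,
    assuming a unique bounded optimum, by enumerating candidate bases
    (d-subsets of constraints). Returns (theta*, basis). Exponential in
    |H|; purely illustrative of the 'basis' concept."""
    d = len(c)
    best = None
    for subset in combinations(H, d):
        theta = _solve_linear([a for a, _ in subset],
                              [beta for _, beta in subset])
        if theta is None:
            continue
        # Keep the vertex only if it is feasible for every constraint in H.
        if all(sum(a[j] * theta[j] for j in range(d)) <= beta + TOL
               for a, beta in H):
            cost = sum(c[j] * theta[j] for j in range(d))
            if best is None or cost < best[0]:
                best = (cost, theta, list(subset))
    if best is None:
        raise ValueError("no feasible basis found")
    _, theta, basis = best
    return theta, basis

# Example: minimize theta1 + theta2 over {theta >= 0, theta1 + theta2 <= 2}.
H = [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0), ((1.0, 1.0), 2.0)]
theta_star, basis = solve_lp(H, [1.0, 1.0])  # optimum at the origin
```

In the example the returned basis consists of the two nonnegativity constraints, which are exactly the d = 2 constraints active at the optimum.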
Node i repeats these two steps until a termination condition is satisfied, namely when the candidate basis has not changed for 2nL + 1 times, with L defined in Assumption 2. The distributed algorithm is formally presented in Algorithm 1.

Algorithm 1 Randomized Constraints Consensus

Input: (H_i(q), c), ε_i, δ_i
Output: θ_sol
Initialization: Set k_i = 1 and [θ_i(1), B_i(1)] = Solve_LP(H_i(q̄), c), with q̄ ∈ Q a fixed initial sample.
Evolution:
(i) Verification:
  • If θ_i(t) = θ_i(t−1), set q_viol = ∅ and go to (ii).
  • Extract
      M_{k_i} ≥ (2.3 + 1.1 ln k_i + ln(1/δ_i)) / ln(1/(1 − ε_i))   (2)
    i.i.d. samples q_{k_i} = {q^(1)_{k_i}, ..., q^(M_{k_i})_{k_i}}.
  • If θ_i(t) ∈ ℋ_i(q^(ℓ)_{k_i}) for all ℓ = 1, ..., M_{k_i}, set q_viol = ∅; else, set q_viol to the first sample for which θ_i(t) ∉ ℋ_i(q_viol).
  • Set k_i = k_i + 1.
(ii) Optimization:
  • Transmit B_i(t) to all j ∈ N_out(i, t) and acquire the incoming neighbors' bases Y_i(t) ≐ ∪_{j ∈ N_in(i, t)} B_j(t).
  • [θ_i(t+1), B_i(t+1)] = Solve_LP(H_i(q_viol) ∪ B_i(t) ∪ Y_i(t), c).
  • If θ_i(t+1) has not changed for 2nL + 1 times and q_viol = ∅, return θ_sol = θ_i(t+1).

The counter k_i counts the number of times the verification step is called. We remark that if at some t the candidate solution has not changed, that is, θ_i(t) = θ_i(t−1), then θ_i(t−1) has already successfully passed the verification step with q_viol = ∅ at time t−1, so verification need not be repeated.

Remark 1. (Asynchronicity). The distributed algorithm presented in this section is completely asynchronous. Indeed, time t is just a universal time that does not need to be known by the nodes. The time-dependent jointly connected graph then captures the fact that nodes can perform computations at different speeds.

Remark 2. In the deterministic constraints consensus algorithm presented in Notarstefano and Bullo (2011), at each iteration the original constraint set of the node needs to be taken into account in the local optimization problem. Here, we can drop this requirement because of the verification step.

4. ANALYSIS OF THE RANDOMIZED CONSTRAINTS CONSENSUS ALGORITHM

In this section, we analyze the convergence properties of the distributed algorithm and investigate the probabilistic properties of the solution computed by the algorithm.
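Before the analysis, the cheap verification step of Algorithm 1 can be sketched as follows (pure Python). The constants in the sample-size bound follow Eq. (2) and should be treated as indicative; the scalar uncertainty model, the sampling routine and all numerical values below are hypothetical placeholders for an agent's local constraint ℋ_i(q).

```python
import math
import random

def sample_size(k, eps, delta):
    """Verification sample bound of the form of Eq. (2):
    M_k >= (2.3 + 1.1 ln k + ln(1/delta)) / ln(1/(1 - eps))."""
    return math.ceil((2.3 + 1.1 * math.log(k) + math.log(1.0 / delta))
                     / math.log(1.0 / (1.0 - eps)))

def verify(theta, sample_q, feasible, k, eps, delta, seed=1):
    """Monte Carlo verification: draw M_k i.i.d. samples and return None
    if theta is feasible for all of them, otherwise the first violating
    sample (the violation certificate q_viol). `sample_q` draws one
    uncertainty sample and `feasible(theta, q)` checks the local
    constraint; both are user-supplied models."""
    rng = random.Random(seed)
    for _ in range(sample_size(k, eps, delta)):
        q = sample_q(rng)
        if not feasible(theta, q):
            return q  # violation certificate q_viol
    return None  # q_viol = empty set: verification successful

# Toy model: scalar constraint (1 + q) * theta <= 1, q uniform in [-0.1, 0.1].
sample_q = lambda rng: rng.uniform(-0.1, 0.1)
feasible = lambda theta, q: (1 + q) * theta[0] <= 1
q_viol = verify([0.5], sample_q, feasible, k=1, eps=0.05, delta=1e-6)
# (1+q)*0.5 <= 0.55 < 1 for every q in the box, so q_viol is None.
```

Note that the cost of a verification round is just M_{k_i} constraint evaluations, while the LP solved in the optimization step involves only the bases and (at most) one violated constraint, matching the claim that verification is inexpensive.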
Theorem 1. Let Assumptions 1 and 2 hold. Given probabilistic levels ε_i > 0 and δ_i > 0, i = 1, ..., n, let ε = Σ_{i=1}^n ε_i and δ = Σ_{i=1}^n δ_i. Then, the following statements hold.
(i) Along the evolution of Algorithm 1, the cost J(B_i(t)) at each node i ∈ {1, ..., n} is monotonically non-decreasing; that is, J(B_i(t+1)) ≥ J(B_i(t)).
(ii) The cost J(B_i(t)) for all i ∈ {1, ..., n} converges to a common value asymptotically; that is, lim_{t→∞} J(B_i(t)) = J̄ for all i ∈ {1, ..., n}.
(iii) If the candidate solution of node i, θ_i(t), has not changed for 2nL + 1 communication rounds, all nodes have a common candidate solution θ_sol.
(iv) The following inequality holds for θ_sol:
  P^M { q ∈ Q^M : P { q ∈ Q : θ_sol ∉ ∩_{i=1}^n ℋ_i(q) } ≤ ε } ≥ 1 − δ,
where M is the cardinality of the collection of multisamples of all agents.
(v) Let B_sol be the basis corresponding to θ_sol. The following inequality holds for B_sol:
  P^M { q ∈ Q^M : P { q ∈ Q : J(B_sol ∪ H(q)) > J(B_sol) } ≤ ε } ≥ 1 − δ,
where H(q) ≐ ∪_{i=1}^n H_i(q) and M is the cardinality of the collection of multisamples of all agents.

Proof of the first statement:
The set of constraints at time t + 1 consists of the node's current basis B_i(t), the collection of neighbors' bases Y_i(t) ≐ ∪_{j ∈ N_in(i, t)} B_j(t), and H_i(q_viol). Since the basis at time t, B_i(t), is part of the constraint set for computing B_i(t+1), J(B_i(t+1)) cannot be smaller than J(B_i(t)).

Proof of the second and third statements:
Since the graph is uniformly jointly strongly connected, for any pair of nodes u and v and for any t > 0, there exists a time-dependent path from u to v (Hendrickx, 2008)—a sequence of nodes ℓ_1, ..., ℓ_k and a sequence of time instances t_1, ..., t_{k+1} with t ≤ t_1 < ... < t_{k+1}, such that the directed edges {(u, ℓ_1), (ℓ_1, ℓ_2), ..., (ℓ_k, v)} belong to the directed graph at time instances {t_1, ..., t_{k+1}}, respectively—of length at most nL. We recall that n is the number of nodes and L is defined in Assumption 2. Consider nodes i and p. If ℓ ∈ N_out(i, t), then J(B_i(t)) ≤ J(B_ℓ(t+1)), as the constraint set of node ℓ at time t+1 is a superset of the constraint set of node i at time t. Iterating this argument, we obtain J(B_i(t)) ≤ J(B_p(t + nL)). Again, since the graph is uniformly jointly strongly connected, there will be a time-varying path of length at most nL from node p to node i. Therefore,
  J(B_i(t)) ≤ J(B_p(t + nL)) ≤ J(B_i(t + 2nL)).
Two scenarios can happen, proving respectively statements (ii) and (iii).

If J(B_i(t)) ≠ J(B_i(t + 2nL)), then J(B_i(t)) < J(B_i(t + 2nL)), i.e., the cost strictly increases over the window. Since each J(B_i(t)) is monotonically non-decreasing and bounded above, it converges to some limit J̄_i. Suppose, by contradiction, that two nodes i and p converge to different values, J̄_i > J̄_p. Then, for any η > 0, there exists a time T_η such that for all t ≥ T_η, J̄_i − J(B_i(t)) ≤ η and J̄_p − J(B_p(t)) ≤ η. Choosing η < J̄_i − J̄_p, this implies that there exists a T_η such that for all t ≥ T_η, J(B_i(t)) ≥ J̄_i − η > J̄_p. Additionally, since the objective value is non-decreasing, it follows that for any time instant t′ ≥ 0, J(B_p(t′)) ≤ J̄_p. Thus, for all t ≥ T_η and all t′ ≥ 0,
  J(B_i(t)) > J(B_p(t′)).   (3)
On the other hand, since the graph is uniformly jointly strongly connected, there exists a time-varying path of length at most nL from node i to node p. Therefore, for all t ≥ T_η,
  J(B_i(t)) ≤ J(B_p(t + nL)).   (4)
However, (4) contradicts (3), proving that J̄_1 = ... = J̄_n. Therefore, it must hold that lim_{t→∞} |J(B_i(t)) − J(B_j(t))| = 0 for all i, j ∈ V. This proves the second statement of the theorem.

If J(B_i(t)) = J(B_i(t + 2nL)), then, considering that node p can be any node of the graph, all nodes have the same cost; that is, J(B_1(t)) = ... = J(B_n(t)). This, combined with Assumption 1, proves the third statement of the theorem: if the candidate solution is not updated for 2nL + 1 communication rounds, all nodes have a common solution and hence the distributed algorithm can be halted.

Proof of the fourth statement:
We first note that, using (Chamanbaz et al., 2016, Theorem 1), (Calafiore and Dabbene, 2007, Theorem 3) and (Dabbene et al., 2010, Theorem 5.3), we can show that—at any iteration t—if the sample size is selected based on (2) and the verification step is successful, that is, q_viol = ∅, then
  P^M { q ∈ Q^M : P { q ∈ Q : θ_i(t) ∉ ℋ_i(q) } ≤ ε_i } ≥ 1 − δ_i.
We note that the above inequality is a centralized result and holds only for the agent's own constraint ℋ_i(q). Also, since the verification has to be successful for θ_sol, then
  P^M { q ∈ Q^M : P { q ∈ Q : θ_sol ∉ ℋ_i(q) } ≤ ε_i } ≥ 1 − δ_i.   (5)
We further remark that the sample bound (2) is obtained from (Chamanbaz et al., 2016, Eq. (10)) by suitably substituting the parameters therein (in particular, replacing δ with δ_i).

Now, we are interested in bounding the probability that θ_sol ∉ ∩_{i=1}^n ℋ_i(q), i.e. in lower-bounding
  P^M { q ∈ Q^M : P { q ∈ Q : θ_sol ∉ ∩_{i=1}^n ℋ_i(q) } ≤ ε }.   (6)
In order to bound (6), we follow a reasoning similar to Margellos et al. (2016). Define the events
  Bad_i ≐ { q ∈ Q : θ_sol ∉ ℋ_i(q) },  Bad ≐ { q ∈ Q : θ_sol ∉ ∩_{i=1}^n ℋ_i(q) }.
Equations (5) and (6) can be written in terms of the events Bad_i and Bad as
  P^M { q ∈ Q^M : P { Bad_i } ≤ ε_i } ≥ 1 − δ_i   (7)
  P^M { q ∈ Q^M : P { Bad } ≤ ε },   (8)
respectively. One can observe that θ_sol ∉ ∩_{i=1}^n ℋ_i(q) implies that there exists i ∈ {1, ..., n} such that θ_sol ∉ ℋ_i(q). Hence, the event Bad can be written as the union of the events Bad_i, i = 1, ..., n; that is, Bad = Bad_1 ∪ Bad_2 ∪ ... ∪ Bad_n. Invoking Boole's inequality (Comtet, 1974) (also known as Bonferroni's inequality), we have
  P { Bad } ≤ Σ_{i=1}^n P { Bad_i }.   (9)
Replacing P{Bad} in (8) with the right-hand side of (9), we obtain
  P^M { q ∈ Q^M : Σ_{i=1}^n P { Bad_i } ≤ ε }
   = P^M { q ∈ Q^M : Σ_{i=1}^n P { Bad_i } ≤ Σ_{i=1}^n ε_i }
   ≥ P^M { ∩_{i=1}^n { q ∈ Q^M : P { Bad_i } ≤ ε_i } }
   ≥ 1 − Σ_{i=1}^n P^M { q ∈ Q^M : P { Bad_i } > ε_i }
   ≥ 1 − Σ_{i=1}^n δ_i = 1 − δ.
We remark that the third line of the above equation comes from the fact that if P{Bad_i} ≤ ε_i for all i = 1, ..., n, then Σ_{i=1}^n P{Bad_i} ≤ Σ_{i=1}^n ε_i. The fourth line is due to the fact that P{∩_i A_i} = 1 − P{∪_i A_i^c}, where A_i^c is the complement of the event A_i.

Proof of the fifth statement:
We first note that if the solution θ_sol is violated by a sample q_v from the uncertainty set, that is, θ_sol ∉ ∩_{i=1}^n ℋ_i(q_v), then J(B_sol ∪ H(q_v)) ≥ J(B_sol), with H(q_v) ≐ ∪_{i=1}^n H_i(q_v). However, due to Assumption 1, any subproblem of (1) has a unique minimum point and hence J(B_sol ∪ H(q_v)) ≠ J(B_sol). This argument, combined with the result of the fourth statement, proves the fifth statement of the theorem. That is, the probability that the solution θ_sol is no longer optimal for a new sample equals the probability that the solution is violated by the new sample. □
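Boole's inequality, the key step (9) in the proof of the fourth statement, can be sanity-checked numerically. The correlated binary events below are an arbitrary toy construction (all driven by one shared uniform draw), not derived from the problem data; the point is that the empirical union probability never exceeds the sum of the empirical marginals.

```python
import random

def empirical_union_bound(n_events=5, n_trials=50000, seed=2):
    """Empirically compare P{union of Bad_i} with sum_i P{Bad_i} for
    correlated events Bad_i = {u < threshold_i} driven by one shared u."""
    rng = random.Random(seed)
    thresholds = [0.02 * (i + 1) for i in range(n_events)]  # nested events
    union_hits = 0
    marginal_hits = [0] * n_events
    for _ in range(n_trials):
        u = rng.random()  # shared randomness makes the events correlated
        hit_any = False
        for i, th in enumerate(thresholds):
            if u < th:
                marginal_hits[i] += 1
                hit_any = True
        union_hits += hit_any
    p_union = union_hits / n_trials
    p_sum = sum(h / n_trials for h in marginal_hits)
    return p_union, p_sum

p_union, p_sum = empirical_union_bound()
# Boole's inequality: the union probability never exceeds the sum.
assert p_union <= p_sum
```

Here the events are nested, so the union probability equals the largest marginal (0.10) while the sum of marginals is 0.30: the bound is loose exactly when the per-node "bad" events overlap, which is why ε = Σ ε_i is a conservative network-wide level.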
5. NUMERICAL SIMULATION

We test the effectiveness of the distributed algorithm presented in Section 3 through extensive numerical simulations. To this end, we generate random linear programs (LPs)—with a large number of uncertain parameters—assigned to the various nodes of the network. Each node is assigned an uncertain constraint set of the form
  (A + A_q)^T θ ≤ b,
where A is a fixed (nominal) matrix and A_q is an interval matrix—a matrix whose entries are bounded in given intervals—defining the uncertainty in the optimization problem. We follow the methodology presented in Dunham et al. (1977) in order to generate A, b and the problem cost c such that the linear program is always feasible. In particular, the elements of A are drawn from the standard Gaussian distribution (mean 0 and variance 1), and the i-th element of b is defined as a scaled version of the i-th row sum of A, b_i ∝ Σ_{j=1}^d A_ij. The objective direction c—which is the same for all nodes—is also drawn from the standard Gaussian distribution. The communication graph G is a random connected graph with a fixed number of neighbors per node.

A workstation with 12 cores and 48 GB of RAM is used to emulate the network model. From an implementation viewpoint, each node executes Algorithm 1 in an independent Matlab environment, and communication is modeled by sharing files between the different Matlab environments. We use the linprog function of Mosek (Andersen and Andersen, 2000) to solve the optimization problems appearing at each iteration of the distributed algorithm.

In Table 1, we change the number of nodes and neighbors such that the graph diameter is always 4. The number of constraints at each node is kept at 100. We set the dimension of the decision variable to d = 5 and consider all elements of A_q to be bounded in a given symmetric interval. The probabilistic levels ε_i and δ_i are set proportionally to 1/n, with n being the number of nodes (first column of Table 1). We report the average—over all nodes—number of times each node updates its basis and transmits it to its outgoing neighbors. It is assumed that each node keeps the latest information received from its neighbors; hence, if the basis is not updated, there is no need to re-transmit it. This also accounts for the asynchronicity of the distributed algorithm. We also report the average—over all nodes—number of times node i performs the verification step, i.e., k_i at convergence. This allows us to show that, with a small number of "design" samples used in the optimization step, nodes compute a solution with a high degree of robustness. In order to examine the robustness of the obtained solution, we run an a posteriori analysis based on Monte Carlo simulation. To this end, we collect all the constraints across the different nodes in a single problem of the form (1) and check—in a centralized setup—the feasibility of the obtained solution for 10,000 random samples extracted from the uncertainty set. The empirical violation (last column of Table 1) is measured by dividing the number of samples that violate the solution by 10,000. Figure 1 depicts the objective value and the distance of θ_i(t), for all i ∈ {1, ..., n}, from θ_sol along the distributed algorithm execution, for a problem instance corresponding to the last row of Table 1. It is observed that all the nodes converge to the same solution θ_sol.

Table 1. The average—over all nodes—number of times a basis is transmitted to the neighbors, the average number of times verification is performed (k_i at convergence) and the empirical violation of the computed solution θ_sol over 10,000 random samples, for different numbers of nodes and neighbors per node. The simulation is performed 100 times for each row and average results are reported.

  nodes n | neighbors per node | diameter | constraints per node | transmissions (avg) | k_i at convergence (avg) | empirical violation
      10  |         3          |    4     |         100          |       29.57         |          31.69           |      2.×10^−
      20  |         4          |    4     |         100          |       26.92         |          29.02           |      1.×10^−
      50  |         6          |    4     |         100          |       26.47         |          28.51           |       7×10^−
     100  |         7          |    4     |         100          |       26.63         |          28.68           |      2.×10^−

Fig. 1. Objective value and distance to θ_sol for all the nodes in the network, corresponding to the last row of Table 1.

6. CONCLUSIONS

In this paper, we proposed a randomized distributed algorithm for solving robust linear programs (LPs) in which the constraint sets are scattered across a network of processors communicating according to a directed, time-varying graph. The distributed algorithm has a sequential nature consisting of two main steps: verification and optimization. Each processor iteratively verifies a candidate solution through a Monte Carlo algorithm, and solves a local LP whose constraint set includes its current basis, the collection of bases from its neighbors and, possibly, a constraint—provided by the Monte Carlo algorithm—violating the candidate solution. The two steps, i.e. verification and optimization, are repeated until a local stopping criterion is met and all nodes converge to a common solution. We analyzed the convergence properties of the proposed algorithm.

REFERENCES

Andersen, E. and Andersen, K. (2000). The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In High Performance Optimization, 197–232. Springer.
Ben-Tal, A., El Ghaoui, L., and Nemirovski, A. (2009).
Robust Optimization. Princeton University Press.
Bürger, M., Notarstefano, G., and Allgöwer, F. (2014). A polyhedral approximation framework for convex and robust distributed optimization. IEEE Transactions on Automatic Control, 59, 384–395.
Calafiore, G. and Dabbene, F. (2007). A probabilistic analytic center cutting plane method for feasibility of uncertain LMIs. Automatica, 43, 2022–2033.
Calafiore, G. and Campi, M. (2004). Uncertain convex programs: randomized solutions and confidence levels. Mathematical Programming, 102, 25–46.
Carlone, L., Srivastava, V., Bullo, F., and Calafiore, G. (2014). Distributed random convex programming via constraints consensus. SIAM Journal on Control and Optimization, 52, 629–662.
Chamanbaz, M., Dabbene, F., Tempo, R., Venkataramanan, V., and Wang, Q.G. (2016). Sequential randomized algorithms for convex optimization in the presence of uncertainty. IEEE Transactions on Automatic Control, 61, 2565–2571.
Comtet, L. (1974). Sieve Formulas, 176–203. Springer Netherlands, Dordrecht.
Dabbene, F., Shcherbakov, P., and Polyak, B. (2010). A randomized cutting plane method with probabilistic geometric convergence. SIAM Journal on Optimization, 20, 3185–3207.
Dunham, J., Kelly, D., and Tolle, J. (1977). Some Experimental Results Concerning the Expected Number of Pivots for Solving Randomly Generated Linear Programs. Technical Report TR 77-16, Operations Research and System Analysis Department, University of North Carolina at Chapel Hill.
Hendrickx, J. (2008). Graphs and networks for the analysis of autonomous agent systems. Ph.D. thesis, Université catholique de Louvain, Louvain, Belgium.
Lee, S. and Nedić, A. (2013). Distributed random projection algorithm for convex optimization. IEEE Journal of Selected Topics in Signal Processing, 7, 221–229.
Lee, S. and Nedić, A. (2016). Asynchronous gossip-based random projection algorithms over networks. IEEE Transactions on Automatic Control, 61, 953–968.
Margellos, K., Falsone, A., Garatti, S., and Prandini, M. (2016). Distributed constrained optimization and consensus in uncertain networks via proximal minimization. arXiv preprint arXiv:1603.02239.
Notarstefano, G. and Bullo, F. (2011). Distributed abstract optimization via constraints consensus: Theory and applications. IEEE Transactions on Automatic Control, 56(10), 2247–2261.
You, K. and Tempo, R. (2016). Networked parallel algorithms for robust convex optimization via the scenario approach. arXiv preprint arXiv:1607.05507.