Hybrid DCOP Solvers: Boosting Performance of Local Search Algorithms
Cornelis Jan van Leeuwen and Przemysław Pawełczak

TNO, Eemsgolaan 3, Groningen, The Netherlands
[email protected]
Delft University of Technology, Van Mourik Broekmanweg 6, Delft, The Netherlands
{p.pawelczak, c.j.vanleeuwen-2}@tudelft.nl

Abstract.
We propose a novel method for expediting both symmetric and asymmetric Distributed Constraint Optimization Problem (DCOP) solvers. The core idea is based on initializing DCOP solvers with greedy, fast, non-iterative DCOP solvers. This is contrary to existing methods, where initialization is always achieved using a random value assignment. We empirically show that changing the starting conditions of existing DCOP solvers not only reduces the algorithm convergence time by up to 50%, but also reduces the communication overhead and leads to a better solution quality. We show that this effect is due to structural improvements in the variable assignment, which is caused by the spreading pattern of DCOP algorithm activation.
Distributed Constraint Optimization Problems (DCOPs) are a method for formalizing and solving problems that have a distributed nature, and in which multiple cooperating agents control discrete variables in order to optimize a common problem. DCOPs can be found in many different domains such as sensor networks [5], mobile sensing team coordination [25], communication [26], home automation [23] and smart grid optimization [8]. The underlying structure of DCOPs is always the same: agents have to assign variables that not only optimize a local set of constraints, but also have to send messages to other agents in order to cooperatively come up with variable assignments that are optimal for the complete set of agents. A special kind of DCOP can be formulated where agents having a shared constraint may assign different costs to a value assignment. These problems are called Asymmetric DCOPs (ADCOPs) [10]. In ADCOPs the aspect of cooperation is even more important, since an assignment leading to a local improvement may deteriorate the global performance.

The algorithms that find solutions for DCOPs can be classified as either complete or incomplete solvers [14]. Solvers such as ADOPT [19], DPOP [21] or AFB [9] are of the complete type, and are used to find the optimal solution. However, DCOPs are NP-hard [18], so finding an optimal solution becomes intractable for large scale problems. Incomplete solvers therefore use heuristic approaches to find a solution which may be suboptimal, but which is reached much faster.
In other words, there is a trade-off between solution quality and speed. In this paper we introduce a new class of incomplete DCOP solvers that combine features of different DCOP solvers into hybrid solvers. Specifically, we show that we can use different initialization methods for existing DCOP algorithms, which has a profound impact on their overall performance. By using initialization methods that are not iterative in approach, and hence converge to a solution very quickly, we can reduce algorithm running times and improve the solution quality. In the evaluation of the proposed hybrid DCOP solvers, we consider existing symmetric and asymmetric solvers, as well as a new algorithm which is an extension of ACLS, which we refer to as ACLS-UB.
Before introducing our proposed novel class of hybrid DCOP solvers, we start by providing the definition of (A)DCOPs. DCOPs are problems from the field of multi-agent systems in which agents reason and send messages to one another to cooperatively decide on their variable assignments, in order to find a solution to a global cost minimization function.
Problem Formalization and Notation
Following the notation from [9], DCOPs are defined as a tuple T = ⟨A, X, D, R⟩, where A is a finite set of agents {A_1, A_2, ..., A_n} and X is the set of variables {X_1, X_2, ..., X_n} with finite discrete domains {D_1, D_2, ..., D_n} from the set of domains D, such that X_i takes its value from D_i. Furthermore, we require that each agent A_i is assigned one corresponding variable X_i, and therefore |A| = |X| = |D|. This one-to-one relation between agents and variables is seen in many DCOP studies, but is not strictly required. Then, R is a set of relations or constraints between variables, in which each constraint C ∈ R defines a non-negative cost depending on the value assignment of the involved variables. Every possible value assignment of a set of variables has a particular induced global cost C : D_{i_1} × D_{i_2} × ... × D_{i_k} → R_{≥0}, while for ADCOPs each constraint defines a set of costs, one for every involved variable, i.e. C : D_{i_1} × D_{i_2} × ... × D_{i_k} → R^k_{≥0}. Having all definitions of T, in (A)DCOPs the goal of the agents is to minimize the global cost function, i.e.

    arg min_X Σ_{C ∈ R} C.    (1)

In the rest of this paper, as in most DCOP studies, we shall only take into account binary constraints, in which exactly two variables are involved in every constraint, which is then of the form C_{i,j} : D_i × D_j → R_{≥0}.

Definitions
We refer to agents as neighbors if there exists a constraint between the corresponding variables. This follows the real-life situation of limited range between cooperative agents, e.g. communication range in wireless networks. The set of all neighbors of an agent, M_i ⊆ A, is called the neighborhood. The set X̂_i denotes the set of known assigned values of the neighbors of A_i and is also referred to as the current partial assignment (CPA). Note that the constraints between variables can be depicted as an undirected graph.

In this paper we propose a new class of DCOP solvers based on a simple yet effective idea. We take the best of different types of existing DCOP solvers, forming a new class of hybrid
DCOP solvers. In particular, we propose to improve the performance of existing local search algorithms by modifying the initialization methods used to find the initial value assignment. In the field of Constraint Satisfaction Problems, which is closely related to DCOPs, the approach of creating an initialization and then repairing it is well known, and can yield great benefits [17]. From the fields of evolutionary algorithms [22], clustering [2], neural networks [24,4] and meta-learning [7] we know that initialization methods can have a great effect on the performance of an algorithm. However, to the best of our knowledge, there has been little to no work on the effects of different initialization methods for (A)DCOPs. In this paper we study the effect different initialization methods may have on the performance of (existing) DCOP algorithms.
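The (A)DCOP formalization above can be made concrete with a minimal sketch (our own illustrative Python, not the authors' Java implementation; all names are hypothetical): a binary DCOP is a set of domains plus pairwise cost tables, and the global cost of Eq. (1) is evaluated over a full assignment — here also minimized by brute force, which is only feasible for tiny instances.

```python
from itertools import product

# Minimal binary DCOP model (illustrative; names are our own, not the paper's).
# Variables are identified by agent index i; domains[i] is the finite domain D_i.
# constraints maps an edge (i, j) to a cost table C_ij : D_i x D_j -> R>=0.
domains = {0: ["r", "g"], 1: ["r", "g"], 2: ["r", "g"]}
constraints = {
    (0, 1): lambda a, b: 0 if a != b else 1,  # graph-coloring style cost
    (1, 2): lambda a, b: 0 if a != b else 1,
    (0, 2): lambda a, b: 0 if a != b else 1,
}

def global_cost(assignment):
    """Sum of all binary constraint costs under a full assignment (Eq. 1)."""
    return sum(c(assignment[i], assignment[j]) for (i, j), c in constraints.items())

def brute_force_optimum():
    """Exhaustive minimization of the global cost (tiny DCOPs only)."""
    best = None
    for values in product(*(domains[i] for i in sorted(domains))):
        assignment = dict(zip(sorted(domains), values))
        cost = global_cost(assignment)
        if best is None or cost < best[0]:
            best = (cost, assignment)
    return best
```

With two colors on a triangle, at least one edge must conflict, so the brute-force optimum cost is 1; incomplete solvers trade this exhaustive search for speed.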
Hybrid DCOP Solver:
We define a hybrid DCOP solver as a solver which sequentially executes other (existing) DCOP solvers. The selection of which DCOP method to use in the next solving stage, and when to switch to a new DCOP solver method, is at the core of the hybrid solver definition.
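As a sketch of this definition (hypothetical interfaces of our own; real solvers such as DSA or MGM-2 would replace the stand-ins), the simplest hybrid solver of this paper chains exactly two stages: a fast non-iterative initializer whose assignment is handed to an iterative local search solver, instead of starting that solver from a random assignment.

```python
import random

def random_init(domains):
    """The de facto standard: assign each variable a random value."""
    return {i: random.choice(d) for i, d in domains.items()}

def hybrid_solve(domains, init_solver, iterative_solver, **kwargs):
    """Hybrid solver sketch: run a non-iterative initializer once, then hand
    its assignment to an iterative solver as the starting point."""
    assignment = init_solver(domains)
    return iterative_solver(domains, assignment, **kwargs)

# Stand-in iterative stage (hypothetical): returns its starting assignment
# unchanged, where e.g. DSA or MGM-2 would iteratively improve it.
def identity_solver(domains, assignment):
    return assignment
```

Swapping `random_init` for a greedy initializer is the only change needed to turn an existing local search solver into the hybrid variant studied here.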
Most (if not all) local search DCOP algorithms use an initial random assignment for all of the variables, which is then iteratively improved upon. Instead, an initial assignment can be computed by a non-iterative DCOP algorithm, such as a simple greedy algorithm, or a more elaborate one such as that introduced in [12]. Since these methods assign a value only once and then terminate, they quickly provide a good initial assignment from which one can start another DCOP method. We hypothesize that the combination of such initialization methods with iterative algorithms in the DCOP solution search will be beneficial because of two effects:

1. Solution quality improvement over the initial assignment: most probably a simple initialization method will find a suboptimal solution, and many local search algorithms will be able to improve it. Algorithms that are known to provide monotonically decreasing solution costs (any algorithm that uses a coordinated change approach, e.g. MGM-2, ACLS, MCS-MGM) are guaranteed to find better (or equal) solutions compared with the initial value assignment; and
2. Increased convergence speed for local search algorithms: DCOP algorithms that use local search will most likely converge faster when a good solution is used for the initial DCOP value assignment. This will lead to a shorter total running time for algorithms that are initialized with a better assignment.
In our experiments we combine different initialization methods with existing local search algorithms, which we shall also refer to as iterative methods. Since the aim of this study is to improve the solution quality and convergence speed of solvers, we do not take complete solvers into account. To understand why only certain combinations of DCOP solvers improve the solution, we first need to classify (i) initialization methods, and (ii) types of DCOP iterative solvers.
– Random: The de facto standard method for all DCOP solvers. It does not take any constraints into account and starts with a random variable assignment.
– k-Step Look-ahead: We define a look-ahead initialization algorithm as one in which one randomly chosen initial node is triggered first, and only after it has chosen a value does it activate its neighbors. When choosing a value, it takes into consideration the effect on all of its neighbors that are reachable within k steps (edges or hops). Three special cases of k-step look-ahead are already known and described in the literature:
  • Zero Step Look-ahead (ZSLA): A zero-step look-ahead algorithm (k = 0) is one in which an agent optimizes only for the constraints it is directly involved in. Such algorithms are also referred to as greedy, breadth-first algorithms;
  • Single Step Look-ahead (SSLA): A single-step look-ahead algorithm (k = 1) is defined as one in which an agent optimizes not only for the constraints it is involved in, but also for the constraints its one-hop neighbors are involved in. One such algorithm is the recently proposed CoCoA algorithm [12] and its variants CoCoA_UF and CoCoA_WPT [13];
  • Max Step Look-ahead (MSLA): If k equals the height of the graph's minimal spanning tree, and the algorithm is started at the root of that spanning tree, the algorithm becomes a complete algorithm, and is in fact equivalent to DPOP [21].

Classifying the iterative methods used in DCOP solvers, we can divide them into two main groups:

– Symmetric DCOP Solvers: These include DSA [27], MGM and MGM-2 [15], and generalized DBA [20];
– Asymmetric DCOP Solvers: These include MCS-MGM [11], and ACLS [10] with its new version ACLS-UB (which is also a novel contribution of this work and is described in Section 4.3).
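The ZSLA initializer above can be sketched as follows (a minimal version of our own, assuming symmetric binary cost tables; function names are hypothetical): an activation wave spreads breadth-first from a random seed node, and each activated agent greedily picks the value minimizing the cost of its constraints with already-assigned neighbors.

```python
import random
from collections import deque

def zsla_init(domains, constraints, seed_node=None):
    """Greedy zero-step look-ahead initialization: each activated agent picks
    the value minimizing the cost against already-assigned neighbors (its
    CPA), then activates its own neighbors, breadth-first."""
    neighbors = {i: set() for i in domains}
    for (i, j) in constraints:
        neighbors[i].add(j)
        neighbors[j].add(i)

    def edge_cost(i, vi, j, vj):
        # Symmetric binary constraints; look up the table in either direction.
        if (i, j) in constraints:
            return constraints[(i, j)](vi, vj)
        return constraints[(j, i)](vj, vi)

    assignment = {}
    start = seed_node if seed_node is not None else random.choice(list(domains))
    queue = deque([start])
    while queue:
        i = queue.popleft()
        if i in assignment:
            continue
        assigned = [j for j in neighbors[i] if j in assignment]
        # Greedy choice: minimize cost against the current partial assignment.
        assignment[i] = min(
            domains[i],
            key=lambda v: sum(edge_cost(i, v, j, assignment[j]) for j in assigned),
        )
        queue.extend(j for j in neighbors[i] if j not in assignment)
    # Disconnected components never get activated; fall back to random values.
    for i in domains:
        assignment.setdefault(i, random.choice(domains[i]))
    return assignment
```

An SSLA such as CoCoA follows the same activation pattern but additionally scores each candidate value against the best responses of one-hop neighbors.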
Remark on Max-Sum:
We are naturally aware of another popular DCOP solver: the Max-Sum algorithm [6] and its variants. However, Max-Sum is unable to utilize the benefit of initialization, as it tries to approximate the global utility of every value, and uses this to determine the best variable assignment. There are extensions of Max-Sum that are able to build upon an initial assignment by using value propagation [29]. In a recent paper [3] the effect of initialization was studied in a variant called Max-Sum ADSSVP. The authors find that the timing of, and approach to, initialization has a great effect on the performance, both in terms of solution quality and convergence speed. For our evaluation, however, we leave Max-Sum out of the comparison, and refer to that paper for a complete overview of different hybrid algorithms.
In addition to the existing DCOP algorithms listed above, we introduce a variant of ACLS, denoted ACLS Unbiased (ACLS-UB).
ACLS-UB Algorithm:
In the original ACLS algorithm [10], at every iteration an agent chooses a variable assignment that would lower its local costs and proposes it as a new value to its neighbors. Neighbors respond with the effect on their side, after which the proposition that has the best effect on the regional cost function is selected. In the ACLS-UB algorithm a value assignment is proposed from all possible values, instead of from the subset that improves the agent's local state. The ACLS-UB algorithm is described in pseudo code in Algorithm 1. ACLS-UB works by iteratively proposing a random value from its domain D_i, sending that value to its neighbors. The neighbors respond by sending the effect of the assignment on their local costs, taking into account all known value assignments. When these local effects are received by the initiating agent, it sums over all received effects, and assigns the proposed value, with probability p, only if it will reduce the current local cost.

Relation of ACLS-UB to Other Solvers:
The main difference between ACLS and ACLS-UB is in line 4 of Algorithm 1, where any random value is picked from the domain. In the long run, the effect of this pick is that all values from the domain are used to retrieve the induced effect on the neighbors' local costs.

Algorithm 1 ACLS-UB

On A_i when activated:
 1: X_i ← chooseRandomValue()
 2: while no termination condition is met do
 3:   send X_i to all A_j ∈ M_i
 4:   υ ← chooseRandomValue()
 5:   send υ to all A_j ∈ M_i
 6:   wait for incoming constraint costs δ_j from each A_j ∈ M_i
 7:   Δ_i ← Σ_{j ∈ M_i} δ_j
 8:   if Δ_i < currentCost and random[0, 1) < p then
 9:     assign X_i ← υ
10:   end if
11: end while

On A_j when receiving υ from A_i:
12: send constraint cost δ_j for X_i = υ

Intuitively, ACLS-UB works very similarly to CoCoA [12,13], with the major difference that CoCoA operates in one single iteration instead of iteratively trying different values. Another difference is that in ACLS-UB the neighbors will send back the value of the constraint cost, whereas CoCoA will send back the lowest induced cost for any assignment in conjunction with the proposed value and the CPA. This extra look-ahead is not efficient in ACLS-UB, since the next-hop neighbors will in fact already have an assignment, and the lowest cost would be too optimistic. Note that the unique-first approach of CoCoA is not required in ACLS-UB, as it can easily recover from earlier suboptimal assignments in later iterations, whereas CoCoA cannot.

In order to understand whether there is any benefit from hybrid solvers, we performed the following experiments. For any problem, we initiate 200 problem instances which are initialized by three methods: random, ZSLA (i.e. greedy) and
SSLA (i.e. CoCoA), and subsequently solved by other DCOP solvers (depending on the experiment). We report the average result over all problems. We assume a solver has converged when no better solution has been found for more than 100 iterations, and define the moment of "convergence" as the first iteration in which the solution was within 1% of the minimal solution. In this way we can compare the convergence speed of different algorithms without having to specify the number of iterations beforehand, in a way similar to the any-time solution proposed in [28].

As performance metrics we score solvers on the following:

– The number of iterations required to converge, denoted as I;
– Final cost of the solution after the algorithm converged, denoted as S;
– Number of messages transmitted during the run, denoted as M;
– Number of constraint evaluations, denoted as E; and
– Running time until the moment of convergence, denoted as T, in seconds.

The constraint evaluations are indicative of the computational complexity and are also referred to as Non-Concurrent Constraint Checks (NCCCs) [16]. For reproducibility and validation of our results, all (Java) code for the algorithms is available at https://github.com/coenvl/jSAM/tree/OptMAS18, and for the experimental setups (MATLAB) at https://github.com/coenvl/mSAM/tree/OptMAS18.

Experiment 1: Symmetric DCOP. We use a (symmetric) graph-coloring problem with three colors, which have to be assigned to 200 variables. The constraints between the variables are chosen as if the nodes were connected via a Delaunay triangulation, where the variables are points chosen randomly on a two-dimensional plane. The result of MGM-2 (p = 0.5) is shown in Figure 1; the numeric results are shown in Table 1, together with those of DSA (variant C, with p = 0. ).

Fig. 1. In a three-color graph coloring experiment, when using SSLA for initialization, the MGM-2 algorithm gains greatly in speed and solution quality.

Table 1. Graph coloring experiment results

Algorithm     I    S   M*    E*     T
Random DSA   157  49  10.4  362.9  3.5
ZSLA DSA     164  49  10.7  381.6  3.7
SSLA DSA     115  47

Experiment 2: Asymmetric DCOP. An asymmetric problem is chosen where the constraints are created using a scale-free graph generation method [1], and are assigned semi-random asymmetric costs such that there is a high probability that a conflict of interests occurs. This problem is created specifically to benchmark asymmetric problems, and is described in more detail in [10, Section 5.2]. The result of the ACLS (p = 0.5) algorithm is shown in Figure 2, and the results of all algorithms are presented in Table 2.

Fig. 2. In a semi-random experiment (Section 5.1, Experiment 2), when using SSLA for initialization, the ACLS algorithm shows faster convergence to a better solution.

Table 2. Semi-randomized asymmetric experiment results

Algorithm         I     S*    M*    E*     T
Random ACLS      95   26.4   151   1321    4.3
ZSLA ACLS        75   26.1   121    995    3.5
SSLA ACLS        48   24.2   107  10571
Random ACLS-UB  333   24.7   531   5747   14.8
ZSLA ACLS-UB    299   24.1   477   5164   13.3
SSLA ACLS-UB    207   23.5   358  12963   11.9
Random MCS-MGM 1154   21.7  1389   5557   55.1
ZSLA MCS-MGM   1022   21.6  1232   4935   48.8
SSLA MCS-MGM    781   22.0   970  13599   40.0
* = ×
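The convergence criterion described in this section (a run counts as converged once no better solution has appeared for more than 100 iterations, and the moment of convergence is the first iteration whose cost is within 1% of the run's minimal cost) can be sketched as follows (our own illustrative helper, not the paper's MATLAB code):

```python
def convergence_iteration(costs, patience=100, tolerance=0.01):
    """costs[t] is the solution cost after iteration t of one run.
    A run is converged when the minimal cost has not been improved for at
    least `patience` iterations; the moment of convergence is the first
    iteration whose cost is within `tolerance` of that minimal cost."""
    best = min(costs)
    # First time the minimum was reached; nothing strictly better follows it.
    if len(costs) - 1 - costs.index(best) < patience:
        return None  # a better solution was found too recently
    threshold = best * (1.0 + tolerance)
    return next(t for t, c in enumerate(costs) if c <= threshold)
```

Metric I from the list above is then this returned iteration index, and T is the wall-clock time elapsed up to it.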
Hybrid Solvers: Discussion of Initial Results:
Based on the results from the first two experiments, we see that local search DCOP solvers reduced their execution time and communication overhead when using initialization methods other than random, but, surprisingly, also found a final solution with a lower cost. The only two exceptions (marked in bold in Tables 1 and 2) are (i) using SSLA with DSA, in which case the messages of the SSLA increase the very low communication overhead of DSA, and (ii) using SSLA with ACLS, in which case the added run time of the SSLA increases the convergence time. The speed performance gain is easily explained: a better initialization reduces the amount of variable "tweaking" needed. But why is the final solution also lower, and why does the solution depend on the initialization method?
We introduce three hypotheses to explain this phenomenon:

1. Hypothesis 1: A lower initial solution will always lead to a lower final solution;
2. Hypothesis 2: Using initialization methods other than random increases the explored portion of the solution space;
3. Hypothesis 3: The initialization method itself finds a starting point in the search space from which a lower local minimum is reachable.

Let us experimentally verify these three hypotheses in detail.
Verifying Hypothesis 1: Solution Cost Correlation
The SSLA algorithm finds a lower initial cost than the ZSLA initializer, which in turn finds a lower cost than a random assignment. Hence the first hypothesis is that lower initial costs will (on average) lead to lower final costs. The initial state is known to be of great influence on the final solution, and a correlation between the initial cost and the final cost could explain why ZSLA or SSLA initialization methods lead to better final solutions.

To test this hypothesis we performed an experiment in which we repeatedly invoked the algorithms on the exact same problem setup, but with different random initializations. We gathered information on the cost at initialization and on the eventual outcome. In Figure 3 we show the minimum, average, and maximum results of 200 instantiations of the algorithms solving the exact same three-color graph coloring problem with a Delaunay graph of size n = 100. For this small experiment we only compare the DSA algorithm instantiated with a randomized approach against one that uses CoCoA WPT for initialization.

From this experiment we see that for some runs we do find a solution with the random strategy which is as good as the SSLA-initialized solution; however, on average the final solution is worse. The spread of the initial and final solutions corresponds with the statistical spread of the random assignments, and some random initializations lead to better final solutions than others. If we look at the correlation between the initial cost and the final cost of the individual runs, we find that there is no correlation between the cost of the initial random assignment and the final minimal cost: the Pearson correlation coefficient between the solution cost at the beginning and at the end is 0.15. With these results we reject hypothesis 1.

Fig. 3. Starting the DSA algorithm from a different starting point will lead to a different outcome. This graph shows the minimum, average and maximum solution costs during the experiments.

Verifying Hypothesis 2: Increased Solution Space Exploration
A DCOP problem is generally a matter of solution space exploration. Solvers that are capable of effectively searching a larger portion of the solution space will reasonably find a better final solution than solvers that cannot. Since local search algorithms search only a small fraction of the search space, the increase in search space exploration by an SSLA may be of large influence. Put differently, a better overview of the trends in the solution space may lead to insights as to where the best optimum lies. If we can show that the solvers using SSLA explore a larger part of the solution space than randomly initialized solvers, this may explain why they find solutions with a lower final cost.

To verify this, we constructed an experiment in which we captured the CPA every time a constraint check was performed, so that we could store every explored value assignment. We did observe that SSLA searches a slightly larger portion of the solution space than ZSLA, which in turn searches a larger part than random initialization. However, the SSLA algorithm sometimes searches a smaller part of the solution space, largely because its successor algorithm converges so quickly. Therefore, with these results we reject hypothesis 2 as well, also because the differences are so marginally small that they cannot explain the significant effect on the results.
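The bookkeeping behind this verification can be sketched as follows (our own illustrative instrumentation, not the paper's experimental code): a tracker that snapshots the assignment at every constraint check and reports the fraction of the search space visited.

```python
class ExplorationTracker:
    """Record every value assignment (CPA snapshot) seen at constraint-check
    time, so that the number of distinct explored points can be compared
    between initialization strategies."""

    def __init__(self):
        self.visited = set()
        self.constraint_checks = 0

    def on_constraint_check(self, assignment):
        self.constraint_checks += 1
        # Frozen, order-independent snapshot of the current assignment.
        self.visited.add(tuple(sorted(assignment.items())))

    def explored_fraction(self, domains):
        """Fraction of the full assignment space that was visited."""
        size = 1
        for d in domains.values():
            size *= len(d)
        return len(self.visited) / size
```

Note that `constraint_checks` grows with every check (an NCCC-style count), while `visited` only grows when a genuinely new point of the solution space is touched; the experiment above compares the latter across initializers.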
Verifying Hypothesis 3: Selection of Starting Point
The initial assignment determines the area of the solution space that is reachable through local search. It is possible that SSLA is capable of finding initial assignments that have relatively good local minima. To explain what we mean by this, let us sketch the following example.

Definition 1. A bridge edge is defined as an edge that, when removed from the graph, leaves the graph no longer connected.

Suppose we have a graph with high modularity, meaning it consists of clusters of densely connected nodes which are connected through a relatively low number of bridge edges. The nodes on these bridges could initially induce a high performance penalty, and it may be impossible for a local search algorithm to escape from these expensive assignments because of the many low cost constraints on the surrounding nodes.

As a minimal example, suppose we have a three-color graph coloring problem with a graph as shown in Figure 4. As we see, nodes 0 and 10 have a constraint with many other nodes, and are connected to one another. If through some unfortunate random assignment they are both given the same initial color (for example red) and the nodes around them are mostly other colors, then no local search algorithm will change that initially assigned color. This is confirmed through experiments in which we use the graph as depicted in Figure 4, letting the algorithms solve the graph-coloring problem. One group of agents is hardwired to start from an initialization in which X_0 = X_10, and X_i ≠ X_0 for all other i. As we can see in Figure 5, the local search algorithm is unable to find a solution in which this constraint is resolved. However, we can guarantee that an SSLA algorithm will never assign the same color to the endpoint vertices of a bridge, and will thus lead to better solutions.

Fig. 4. An example graph in which two dense clusters of nodes are connected by a single bridge.

Fig. 5. The results of the MCS-MGM algorithm trying to solve the graph coloring problem on the graph from Figure 4, with various initialization strategies. The "unfortunate" strategy is hardwired to get a conflict on the bridge.

Proposition 1. An SSLA will never assign the same color to the endpoints of a bridge.

Proof (Sketch). When an SSLA starts, any random agent is activated first. If either bridge endpoint is selected first, then logically they will not be activated simultaneously. If any other node is selected, then this node will execute the algorithm and select a random initial assignment. After that it activates all of its neighbors, which will execute the algorithm, until at some iteration the first bridge endpoint is selected. At this moment the other endpoint cannot be activated, or the edge would not have been a true bridge. Because the algorithm is active in one bridge endpoint, but never in both at the same time, one must assign a value before the other. Moreover, when an endpoint of the bridge eventually has to assign a value, no nodes from the other component can have been assigned a value yet, because the bridge is the only connecting edge. Therefore, when the second bridge endpoint is activated it will only have the first endpoint as a constraining value, and will thus always pick a different value.

Although this exact order of events will not hold for pseudo-bridges that connect clusters within a graph, there will be an ordering in which the nodes are activated, as long as the detour path between the vertices on the pseudo-bridge is longer than three. In many graphs with high modularity, the values of nodes in the bridging constraints will therefore be chosen with low costs, and the coloring within the clusters can simply be permuted. Therefore, the local search algorithms that continue from these solutions generally yield higher quality than those starting from random initial assignments.

If this final hypothesis is true, then we expect different results for different types of graphs, especially for various densities. We would expect the benefit of an SSLA to decrease for problem graphs of higher densities, since in these graphs bridges or pseudo-bridges occur less frequently.
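Bridges in the sense of Definition 1 can be found with the classic DFS low-link method; the following sketch (our own code, standard algorithm, not from the paper) can be used to check for bridges such as the edge between nodes 0 and 10 in Figure 4.

```python
def find_bridges(n, edges):
    """Return all bridge edges of an undirected simple graph using the
    classic DFS low-link method: tree edge (u, v) is a bridge iff no
    back edge from v's subtree reaches u or above, i.e. low[v] > disc[u]."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low, bridges = {}, {}, []
    time = [0]

    def dfs(u, parent):
        disc[u] = low[u] = time[0]
        time[0] += 1
        for v in adj[u]:
            if v not in disc:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))
            elif v != parent:
                low[u] = min(low[u], disc[v])

    for u in range(n):
        if u not in disc:
            dfs(u, -1)
    return bridges
```

For example, two triangles joined by a single edge yield exactly that edge as the only bridge, mirroring the two-cluster structure of Figure 4.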
In our final experiment we use once more the graph coloring problem with |D| = 3, and instantiate randomly connected graphs with n = 200 at nine varying densities between 0.01 and 0.3. We generate 50 graphs of every density, let the different solver combinations (initialization and iteration) solve the same graphs, and report the average performance over all instances. The convergence criteria were identical to the experiment described in Section 5.

In Figure 6 we show the averaged results for the MGM-2 solver when solving the graphs with different densities. We can indeed conclude that for graphs with a low to medium density (up to 0. ), the initialization method has a clear impact.

Fig. 6. The solution cost versus time of the MGM-2 algorithm when solving random graphs of different densities shows the impact of initialization methods. Up to a density of 0.

Conclusions

In this article we studied the effect of different initialization strategies on the performance of different DCOP algorithms. In particular, we introduced a new class of hybrid algorithms which combine the strategies of SSLA algorithms with iterative local search algorithms. We found that this combination not only joins the fast convergence of the SSLA with the eventually better solution quality of the iterative approach, but that the hybrid solver actually improves the quality of the final solution compared to using the iterative approach alone.

Two possible hypotheses that could explain this observation were rejected: (i) better initializations do not necessarily lead to lower final solution costs, and (ii) using an SSLA does not significantly increase the searched solution space. Instead, we hypothesize that using an SSLA (such as CoCoA) selects an initialization that lies in a region of the solution space which has a lower local minimum than the statistically expected local minimum. This is caused by a reduction of conflicting values assigned on bridge vertices.
In our final experiment we showed that this effect is most pronounced on low density graphs, in which (pseudo-)bridges are more prevalent and the solution cost over the search space is less homogeneous. Our hybrid approach seems well suited for applications in which maximum performance in terms of both convergence speed and solution cost is required. In fact, for problems with low graph densities, it may be used as a general strategy for initial value assignment instead of random values.
References
1. Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Reviews of Modern Physics (1), 47–98 (2002).
2. Cao, F., Liang, J., Jiang, G.: An initialization method for the K-means algorithm using neighborhood model (3), 474–483 (2009).
3. Chen, Z., Deng, Y., Wu, T.: An iterative refined max-sum AD algorithm via single-side value propagation and local search. pp. 195–202. São Paulo, Brazil (May 8–12, 2017).
4. Dolezel, P., Skrabanek, P., Gago, L.: Weight initialization possibilities for feedforward neural network with linear saturated activation functions. In: Proceedings of the Conference on Programmable Devices and Embedded Systems. pp. 49–54. IFAC, Brno, Czech Republic (Oct 5–7, 2016).
5. Farinelli, A., Rogers, A., Jennings, N.R.: Agent-based decentralised coordination for sensor networks using the max-sum algorithm. Autonomous Agents and Multi-Agent Systems (3), 337–380 (May 2014).
6. Farinelli, A., Rogers, A., Petcu, A., Jennings, N.R.: Decentralised coordination of low-power embedded devices using the max-sum algorithm. pp. 639–646. Estoril, Portugal (May 12–16, 2008).
7. Feurer, M., Springenberg, J.T., Hutter, F.: Initializing Bayesian hyperparameter optimization via meta-learning. pp. 1128–1135. AAAI, Austin, TX, USA (Jan 25–30, 2015).
8. Fioretto, F., Yeoh, W., Pontelli, E., Ma, Y., Ranade, S.: A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. pp. 999–1007. São Paulo, Brazil (May 8–12, 2017).
9. Gershman, A., Meisels, A., Zivan, R.: Asynchronous forward bounding for distributed COPs (1), 61–88 (Jan 2009).
10. Grinshpoun, T., Grubshtein, A., Zivan, R., Netzer, A., Meisels, A.: Asymmetric distributed constraint optimization problems, 613–647 (May 2013).
11. Grubshtein, A., Zivan, R., Grinshpoun, T., Meisels, A.: Local search for distributed asymmetric optimization. pp. 1015–1022. Toronto, Canada (May 10–16, 2010).
12. van Leeuwen, C.J., Pawełczak, P.: CoCoA: A non-iterative approach to a local search (A)DCOP solver. AAAI, San Francisco, CA, USA (Feb 4–11, 2017).
13. van Leeuwen, C.J., Yıldırım, K.S., Pawełczak, P.: Self adaptive safe provisioning of wireless power using DCOPs. Tucson, AZ, USA (Sep 18–22, 2017).
14. Leite, A.R., Enembreck, F., Barthès, J.P.A.: Distributed constraint optimization problems: Review and perspectives. Expert Systems with Applications (11), 5139–5157 (Sep 2014).
15. Maheswaran, R.T., Pearce, J.P., Tambe, M.: Distributed algorithms for DCOP: A graphical-game-based approach. In: Proceedings of the International Conference on Parallel and Distributed Computing Systems. pp. 432–439. ISCA, San Francisco, CA, USA (Sep 15–17, 2004).
16. Meisels, A., Kaplansky, E., Razgon, I., Zivan, R.: Comparing performance of distributed constraints processing algorithms. pp. 86–93. Bologna, Italy (Jul 15–19, 2002).
17. Minton, S., Johnston, M.D., Philips, A.B., Laird, P.: Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems (1–3), 161–205 (Dec 1992).
18. Modi, P.J.: Distributed Constraint Optimization for Multiagent Systems. Ph.D. thesis, University of Southern California, Los Angeles, CA, USA (2003).
19. Modi, P.J., Shen, W.M., Tambe, M., Yokoo, M.: ADOPT: asynchronous distributed constraint optimization with quality guarantees (1–2), 149–180 (Jan 2005).
20. Okamoto, S., Zivan, R., Nahon, A.: Distributed breakout: Beyond satisfaction. pp. 447–453. New York, NY, USA (Jun 9–15, 2016).
21. Petcu, A., Faltings, B.: A scalable method for multiagent constraint optimization. pp. 266–271. Edinburgh, Scotland, UK (Jul 30–Aug 5, 2005).
22. Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.: A novel population initialization method for accelerating evolutionary algorithms (10), 1605–1614 (2007).
23. Rust, P., Picard, G., Ramparany, F.: Using message-passing DCOP algorithms to solve energy-efficient smart environment configuration problems. pp. 468–474. New York, NY, USA (Jul 9–15, 2016).
24. Yam, J.Y., Chow, T.W.: A weight initialization method for improving training speed in feedforward neural network. Neurocomputing (1), 219–232 (2000).
25. Yedidsion, H., Zivan, R., Farinelli, A.: Explorative max-sum for teams of mobile sensing agents. pp. 549–556. Paris, France (May 5–9, 2014).
26. Yeoh, W., Yokoo, M.: Distributed problem solving. AI Magazine (3), 53–65 (2012).
27. Zhang, W., Wang, G., Zhao, X., Wittenburg, L.: Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks (1), 55–87 (Jan 2005).
28. Zivan, R., Okamoto, S., Peled, H.: Explorative anytime local search for distributed constraint optimization.