
Publications


Featured research published by Adrian Petcu.


Adaptive Agents and Multi-Agent Systems | 2006

MDPOP: faithful distributed implementation of efficient social choice problems

Adrian Petcu; Boi Faltings; David C. Parkes

We model social choice problems in which self-interested agents with private utility functions have to agree on values for a set of variables subject to side constraints. The goal is to implement the efficient solution, maximizing the total utility across all agents. Existing techniques for this problem fall into two groups. Distributed constraint optimization algorithms can find the solution without any central authority but are vulnerable to manipulation. Incentive-compatible mechanisms can ensure that agents report truthful information about their utilities and prevent manipulation of the outcome, but require centralized computation. Following the agenda of distributed implementation [16], we integrate these methods and introduce MDPOP, the first distributed optimization protocol that faithfully implements the VCG mechanism for this problem of efficient social choice. No agent can benefit by unilaterally deviating from any aspect of the protocol, whether in information revelation, computation, or communication. The only central authority required is a bank that can extract payments from agents. In addition, we exploit structure in the problem and develop a faithful method to redistribute some of the VCG payments back to agents. Agents need only communicate with other agents that have an interest in the same variable, and provided that the distributed optimization itself scales, the entire method scales to problems of unbounded size.
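
The economic device behind MDPOP is the VCG payment rule. The following is a minimal, centralized Python sketch of that rule; the paper computes the same quantities distributedly and redistributes part of the payments, so the encoding, function names, and toy example below are illustrative assumptions, not the paper's implementation.

```python
from itertools import product

def solve(utilities, domains):
    """Brute-force search for the assignment maximizing total utility.
    utilities: dict agent -> function(assignment dict) -> float
    domains:   dict variable -> list of candidate values
    """
    variables = list(domains)
    best, best_value = None, float("-inf")
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        total = sum(u(assignment) for u in utilities.values())
        if total > best_value:
            best, best_value = assignment, total
    return best, best_value

def vcg_payments(utilities, domains):
    """Each agent pays the externality it imposes on the others:
    (the others' best achievable welfare without it)
    minus (the others' welfare at the chosen solution)."""
    solution, _ = solve(utilities, domains)
    payments = {}
    for agent in utilities:
        others = {a: u for a, u in utilities.items() if a != agent}
        _, welfare_without_agent = solve(others, domains)
        welfare_with_agent = sum(u(solution) for u in others.values())
        payments[agent] = welfare_without_agent - welfare_with_agent
    return solution, payments

# Toy example (illustrative): two agents agreeing on a shared meeting slot.
domains = {"slot": ["am", "pm"]}
utilities = {
    "alice": lambda a: {"am": 3, "pm": 1}[a["slot"]],
    "bob":   lambda a: {"am": 0, "pm": 1}[a["slot"]],
}
print(vcg_payments(utilities, domains))
# -> ({'slot': 'am'}, {'alice': 1, 'bob': 0}): alice pays bob's forgone utility.
```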


Principles and Practice of Constraint Programming | 2005

Approximations in distributed optimization

Adrian Petcu; Boi Faltings

We present a parameterized approximation scheme for distributed combinatorial optimization problems based on dynamic programming. The algorithm is a utility propagation method and requires a linear number of messages. For exact computation, the size of the largest message is exponential in the width of the constraint graph. We present a distributed approximation scheme where the size of the largest message can be adapted to the desired approximation ratio, α. The process is similar to a distributed version of the mini-bucket elimination scheme, performed on a DFS traversal of the problem. The second part of this paper presents an anytime version of the algorithm, which is suitable for very large, distributed problems, where the propagations may take too long to complete. Simulation results show that these algorithms are a viable approach to real-world, loose optimization problems, possibly of unbounded size.
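
As a rough illustration of the mini-bucket idea behind this approximation scheme, here is a centralized Python sketch. The paper performs the elimination along a DFS traversal of the problem in a distributed fashion; the data layout and the greedy partitioning below are assumptions made for brevity.

```python
from itertools import product

def minibucket_eliminate(functions, var, domains, k):
    """Approximately eliminate `var` from a set of utility functions.
    functions: list of (scope, table) pairs; scope is a tuple of variable names,
               table maps value tuples (in scope order) to utilities.
    Functions mentioning `var` are greedily split into mini-buckets whose joint
    scope (excluding `var`) has at most k variables; `var` is then maxed out of
    each mini-bucket separately. The sum of the per-bucket results upper-bounds
    the exact elimination, and message size stays O(d^k) instead of O(d^width).
    """
    touching = [f for f in functions if var in f[0]]
    rest = [f for f in functions if var not in f[0]]

    # Greedy partition of the bucket into mini-buckets with bounded scope.
    buckets = []
    for f in touching:
        for b in buckets:
            joint = set(f[0]).union(*(set(g[0]) for g in b))
            if len(joint - {var}) <= k:
                b.append(f)
                break
        else:
            buckets.append([f])

    new_functions = []
    for b in buckets:
        scope = sorted(set().union(*(set(g[0]) for g in b)) - {var})
        table = {}
        for values in product(*(domains[v] for v in scope)):
            context = dict(zip(scope, values))
            best = float("-inf")
            for x in domains[var]:
                context[var] = x
                best = max(best, sum(t[tuple(context[v] for v in s)] for s, t in b))
            table[values] = best
        new_functions.append((tuple(scope), table))
    return rest + new_functions

# Example: two binary functions over {x, y} and {x, z}, eliminating x with k = 1
doms = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
f1 = (("x", "y"), {(a, b): a ^ b for a in doms["x"] for b in doms["y"]})
f2 = (("x", "z"), {(a, c): a & c for a in doms["x"] for c in doms["z"]})
print(minibucket_eliminate([f1, f2], "x", doms, k=1))
```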


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2007

Optimal Solution Stability in Dynamic, Distributed Constraint Optimization

Adrian Petcu; Boi Faltings

We define the distributed, continuous-time combinatorial optimization problem. We propose a new notion of solution stability in dynamic optimization, based on the cost of change from an already-implemented solution to the new one. Change costs are modeled with stability constraints and can evolve over time. We present RSDPOP, a self-stabilizing optimization algorithm which guarantees optimal solution stability in dynamic environments, based on this definition. In contrast to current approaches, which solve sequences of static CSPs, our mechanism offers much more flexibility: each variable can be assigned and reassigned its own commitment deadlines at any point in time. The optimization process is therefore continuous, rather than a sequence of independently solved problem snapshots. We present experimental results from the distributed meeting scheduling domain.
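
One way to read the stability idea, as a hedged sketch: change costs enter the objective as penalties on reassigning variables that have already been committed. The paper encodes these costs as stability constraints inside the DCOP itself; the names below are illustrative.

```python
def stable_objective(utility, change_costs, committed, candidate):
    """Rank a candidate solution by raw utility minus the cost of changing
    already-committed variables. `change_costs` maps variable -> cost of
    reassigning it after its commitment deadline has passed; `committed` holds
    the currently implemented assignment. Illustrative sketch only."""
    penalty = sum(cost for var, cost in change_costs.items()
                  if var in committed and committed[var] != candidate.get(var))
    return utility(candidate) - penalty
```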


Journal of Artificial Intelligence Research | 2008

M-DPOP: faithful distributed implementation of efficient social choice problems

Adrian Petcu; Boi Faltings; David C. Parkes

In the efficient social choice problem, the goal is to assign values, subject to side constraints, to a set of variables to maximize the total utility across a population of agents, where each agent has private information about its utility function. In this paper we model the social choice problem as a distributed constraint optimization problem (DCOP), in which each agent can communicate with other agents that share an interest in one or more variables. Whereas existing DCOP algorithms can be easily manipulated by an agent, either by misreporting private information or deviating from the algorithm, we introduce M-DPOP, the first DCOP algorithm that provides a faithful distributed implementation for efficient social choice. This provides a concrete example of how the methods of mechanism design can be unified with those of distributed optimization. Faithfulness ensures that no agent can benefit by unilaterally deviating from any aspect of the protocol, whether in information revelation, computation, or communication, and whatever the private information of other agents. We allow for payments by agents to a central bank, which is the only central authority that we require. To achieve faithfulness, we carefully integrate the Vickrey-Clarke-Groves (VCG) mechanism with the DPOP algorithm, such that performing the required computation, reporting information, and sending messages is in each agent's own best interest. Determining agent i's payment requires solving the social choice problem without agent i. Here, we present a method to reuse computation performed in solving the main problem in a way that is robust against manipulation by the excluded agent. Experimental results on structured problems show that as much as 87% of the computation required for solving the marginal problems can be avoided by reuse, providing very good scalability in the number of agents. On unstructured problems, we observe a sensitivity of M-DPOP to the density of the problem, and we show that reusability decreases from almost 100% for very sparse problems to around 20% for highly connected problems. We close with a discussion of the features of DCOP that enable faithful implementations in this problem, the challenge of reusing computation from the main problem in marginal problems under other algorithms such as ADOPT and OptAPO, and the prospect of methods to avoid the welfare loss that can occur because of the transfer of payments to the bank.
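
For reference, the VCG payment the abstract refers to can be written as below, where x* is the efficient solution to the main problem and x^{-i} is the solution to the marginal problem that excludes agent i; M-DPOP's contribution is computing these marginal problems distributedly while reusing the main-problem computation. The notation here is standard rather than quoted from the paper.

```latex
\mathrm{tax}_i \;=\; \sum_{j \neq i} u_j\!\left(x^{-i}\right) \;-\; \sum_{j \neq i} u_j\!\left(x^{*}\right),
\qquad
x^{*} = \arg\max_x \sum_j u_j(x), \quad
x^{-i} = \arg\max_x \sum_{j \neq i} u_j(x)
```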


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2007

A Hybrid of Inference and Local Search for Distributed Combinatorial Optimization

Adrian Petcu; Boi Faltings

We present a new hybrid algorithm for local search in distributed combinatorial optimization. The method mixes classical local search, in which nodes make decisions based only on local information, with full inference methods that guarantee completeness. We propose LS-DPOP(k), a hybrid method that combines the advantages of both approaches. LS-DPOP(k) is a utility propagation algorithm controlled by a parameter k which specifies the maximal allowable amount of inference; the maximal space requirements are exponential in this parameter. In the dense parts of the problem, where the required amount of inference exceeds this limit, the algorithm executes a local search procedure guided by as much inference as k allows. LS-DPOP(k) can be seen as a large neighborhood search, where exponential neighborhoods are rigorously determined according to problem structure, and polynomial effort is spent on their complete exploration at each local search step. We show the efficiency of this approach with experimental results from the distributed meeting scheduling domain.
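
A minimal sketch of the per-node trade-off that the parameter k controls follows; the concrete bookkeeping in LS-DPOP(k) is more involved, and the names and size accounting below are illustrative assumptions.

```python
def plan_node(separator, domain_size, k):
    """Decide, for one node of the DFS tree, whether exact utility propagation is
    affordable. A node whose separator has at most k variables can send the full
    DPOP-style utility message (size domain_size ** len(separator)); otherwise it
    falls back to local search guided by the bounded inference that k still allows."""
    if len(separator) <= k:
        return "exact_inference", domain_size ** len(separator)
    return "local_search", domain_size ** k
```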


Joint ERCIM/CoLogNET International Conference on Recent Advances in Constraints (CSCLP'04) | 2004

A value ordering heuristic for local search in distributed resource allocation

Adrian Petcu; Boi Faltings

In this paper we develop a localized value-ordering heuristic for distributed resource allocation problems. We show how this value-ordering heuristic can be used to achieve desirable properties (increased effectiveness, or better allocations). The specific distributed resource allocation problem that we consider is sensor allocation in sensor networks, and the algorithmic skeleton that we use to evaluate this heuristic is the distributed breakout algorithm (DBA). We compare this technique with standard DBA and with another value-ordering heuristic [10], and the experimental results show that it significantly outperforms both of them in terms of the number of cycles required to solve the problem (and therefore also communication and time requirements), especially when the problems are difficult. The resulting algorithm is also able to solve a higher percentage of the test problems. We show that a simple variation of this technique exhibits an interesting competition behavior that could be used to achieve higher-quality allocations of the resource pool. Moreover, combinations of the two methods are possible, leading to interesting results. Finally, we note that this heuristic is domain-specific but not algorithm-specific, meaning that it could most likely give good results in conjunction with other DisCSP algorithms as well.
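
As a rough sketch of what a localized value-ordering heuristic can look like in this setting: the concrete heuristic in the paper is tailored to sensor allocation, and the helper used below is hypothetical.

```python
def order_values(variable, domain, conflict_count):
    """Try the least-contended resources first: rank candidate values by the
    number of conflicts with neighbouring agents they would currently create.
    `conflict_count(variable, value)` is an assumed helper that inspects the
    neighbours' current assignments."""
    return sorted(domain, key=lambda value: conflict_count(variable, value))
```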


Workshop on Internet and Network Economics | 2005

Incentive compatible multiagent constraint optimization

Adrian Petcu; Boi Faltings

We present an incentive-compatible distributed optimization method applied to social choice problems. The method works by computing and collecting VCG taxes in a distributed fashion, which introduces a certain resilience to manipulation by the problem-solving agents. An extension of this method sacrifices Pareto-optimality in favor of budget balance: the chosen solutions are no longer optimal, but the self-interested agents pay the taxes among themselves, producing no tax surplus. This eliminates unwanted incentives for the problem-solving agents, ensuring their faithfulness.


International Joint Conference on Artificial Intelligence | 2003

Applying interchangeability techniques to the distributed breakout algorithm

Adrian Petcu; Boi Faltings

This paper presents two methods for improving the performance of the Distributed Breakout Algorithm using the notion of interchangeability. In particular, we use neighborhood partial and full interchangeability techniques to keep conflicts localized and avoid spreading them to neighboring areas. Our experiments on distributed sensor networks show that such techniques can significantly reduce the number of cycles required to solve the problems (and therefore also communication and time requirements), especially on difficult problems. Moreover, the improved algorithms are able to solve a higher proportion of the test problems.
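
One way to picture neighborhood interchangeability, as a hedged sketch: the predicate below is illustrative, and the paper's partial-interchangeability variant relaxes it.

```python
def neighborhood_interchangeable(variable, value_a, value_b, neighbors, violated):
    """Two values of a variable are neighborhood interchangeable if they violate
    exactly the same constraints with every neighbour, so substituting one for the
    other can never spread a conflict to a neighbouring area.
    `violated(variable, value, neighbor)` is an assumed helper returning the set
    of that neighbour's values which would conflict."""
    return all(violated(variable, value_a, n) == violated(variable, value_b, n)
               for n in neighbors)
```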


International Joint Conference on Artificial Intelligence | 2005

A scalable method for multiagent constraint optimization

Adrian Petcu; Boi Faltings


Adaptive Agents and Multi-Agent Systems | 2008

Decentralised coordination of low-power embedded devices using the max-sum algorithm

Alessandro Farinelli; Alex Rogers; Adrian Petcu; Nicholas R. Jennings

Collaboration


Adrian Petcu's main collaborators.

Top Co-Authors

Boi Faltings
École Polytechnique Fédérale de Lausanne

Akshat Kumar
Singapore Management University

Marius-Calin Silaghi
École Polytechnique Fédérale de Lausanne

Thomas Léauté
École Polytechnique Fédérale de Lausanne

Marius Silaghi
Florida Institute of Technology