
Publication


Featured research published by Claudia V. Goldman.


Journal of Artificial Intelligence Research | 2004

Decentralized control of cooperative systems: categorization and complexity analysis

Claudia V. Goldman; Shlomo Zilberstein

Decentralized control of cooperative systems captures the operation of a group of decision-makers that share a single global objective. The difficulty in solving optimally such problems arises when the agents lack full observability of the global state of the system when they operate. The general problem has been shown to be NEXP-complete. In this paper, we identify classes of decentralized control problems whose complexity ranges between NEXP and P. In particular, we study problems characterized by independent transitions, independent observations, and goal-oriented objective functions. Two algorithms are shown to solve optimally useful classes of goal-oriented decentralized processes in polynomial time. This paper also studies information sharing among the decision-makers, which can improve their performance. We distinguish between three ways in which agents can exchange information: indirect communication, direct communication and sharing state features that are not controlled by the agents. Our analysis shows that for every class of problems we consider, introducing direct or indirect communication does not change the worst-case complexity. The results provide a better understanding of the complexity of decentralized control problems that arise in practice and facilitate the development of planning algorithms for these problems.


Adaptive Agents and Multi-Agent Systems | 2003

Optimizing information exchange in cooperative multi-agent systems

Claudia V. Goldman; Shlomo Zilberstein

Decentralized control of a cooperative multi-agent system is the problem faced by multiple decision-makers that share a common set of objectives. The decision-makers may be robots placed at separate geographical locations or computational processes distributed in an information space. It may be impossible or undesirable for these decision-makers to share all their knowledge all the time. Furthermore, exchanging information may incur a cost associated with the required bandwidth or with the risk of revealing it to competing agents. Assuming that communication may not be reliable adds another dimension of complexity to the problem. This paper develops a decision-theoretic solution to this problem, treating both standard actions and communication as explicit choices that the decision-maker must consider. The goal is to derive both action policies and communication policies that together optimize a global value function. We present an analytical model to evaluate the trade-off between the cost of communication and the value of the information received. Finally, to address the complexity of this hard optimization problem, we develop a practical approximation technique based on myopic meta-level control of communication.
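The myopic meta-level control idea in the abstract above can be sketched in a few lines: before acting, an agent compares the expected value of synchronizing information now against the value of staying silent, and pays the communication cost only when the one-step gain justifies it. The function names and numbers below are illustrative assumptions, not the paper's implementation.

```python
# Myopic (one-step) control of communication: communicate only when the
# expected gain from synchronizing exceeds the cost of the message.
# All names and values are illustrative assumptions.

def value_of_communication(value_if_sync, value_if_silent, cost):
    """Net one-step gain from communicating now."""
    return value_if_sync - value_if_silent - cost

def should_communicate(value_if_sync, value_if_silent, cost):
    return value_of_communication(value_if_sync, value_if_silent, cost) > 0
```

For example, with expected values of 10 (after syncing) versus 7 (staying silent) and a message cost of 2, the net gain is 1 and the agent communicates; raising the cost to 4 makes it stay silent.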


Adaptive Agents and Multi-Agent Systems | 2003

Transition-independent decentralized Markov decision processes

Raphen Becker; Shlomo Zilberstein; Victor R. Lesser; Claudia V. Goldman

There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together through a global reward function that depends upon both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
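The structure of the class described above can be illustrated with a toy sketch (assumed, not the paper's code): two agents follow independent deterministic transition functions, and the global value decomposes into their independent local rewards plus a coupling term that depends on both histories, here a bonus paid only when both histories contain a designated joint event.

```python
# Toy transition-independent two-agent setup: independent dynamics,
# coupled only through a global reward over both histories.
# All transition functions, rewards, and events are illustrative.

def run_agent(start, transition, steps):
    """Roll out one agent's history under its own transition function."""
    history = [start]
    for _ in range(steps):
        history.append(transition(history[-1]))
    return history

def global_value(hist1, hist2, local_reward, joint_bonus, joint_event):
    # Independent local rewards...
    value = sum(local_reward(s) for s in hist1) + sum(local_reward(s) for s in hist2)
    # ...plus a coupling term that depends on both agents' histories.
    if joint_event in hist1 and joint_event in hist2:
        value += joint_bonus
    return value
```

With a chain 0 -> 1 -> 2 -> 3, a local reward of 1 for reaching state 3, and a bonus of 5 when both agents visit state 3, the global value is 1 + 1 + 5 = 7.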


Adaptive Agents and Multi-Agent Systems | 2003

Self-organization through bottom-up coalition formation

Mark Sims; Claudia V. Goldman; Victor R. Lesser

We present a distributed approach to self-organization in a distributed sensor network. The agents in the system use a series of negotiations incrementally to form appropriate coalitions of sensor and processing resources. Since the system is cooperative, we have developed a range of protocols that allow the agents to share meta-level information before they allocate resources. On one extreme the protocols are based on local utility computations, where each agent negotiates based on its local perspective. From there, a continuum of additional protocols exists in which agents base decisions on marginal social utility, the combination of an agent's marginal utility and that of others. We present a formal framework that allows us to quantify how social an agent can be in terms of the set of agents that are considered, and how the choice of a certain level affects the decisions made by the agents and the global utility of the organization. Our results show that by implementing social agents, we obtain an organization with a high global utility both when agents negotiate over complex contracts and when they negotiate over simple ones. The main difference between the two cases is the rate of convergence. Our algorithm is incremental, and therefore the organization that evolves can adapt and stabilize as agents enter and leave the system.
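The marginal-social-utility continuum mentioned above can be sketched as follows (an illustration, not the paper's code): an agent scores a proposal by its own marginal utility plus the marginal utilities of whichever other agents fall inside its "social set"; widening that set makes the agent more social.

```python
# Illustrative marginal-social-utility scoring. An empty social set gives a
# purely self-interested agent; the full agent set gives a fully social one.
# All gains below are made-up numbers.

def marginal_social_utility(own_gain, others_gains, social_set):
    """own_gain: this agent's marginal utility of the proposal.
    others_gains: dict agent -> that agent's marginal utility.
    social_set: the other agents this agent chooses to take into account."""
    return own_gain + sum(others_gains[a] for a in social_set)
```

A proposal that helps the agent but hurts a neighbor scores differently depending on whether the neighbor is inside the social set, which is exactly the lever the protocols above vary.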


Adaptive Agents and Multi-Agent Systems | 2003

The complexity of multiagent systems: the price of silence

Zinovi Rabinovich; Claudia V. Goldman; Jeffrey S. Rosenschein

In this work, we suggest representing multiagent systems using computational models, specifically choosing Multi-Prover Interactive Protocols to capture agent systems and the interactions occurring within them. This approach enables us to analyze complexity issues related to multiagent systems. We focus here on the complexity of coordination and study the possible sources of this complexity. We show that there are complexity bounds that cannot be lowered even when approximation techniques are applied.


Autonomous Agents and Multi-Agent Systems | 2011

Practical voting rules with partial information

Meir Kalech; Sarit Kraus; Gal A. Kaminka; Claudia V. Goldman

Voting is an essential mechanism that allows multiple agents to reach a joint decision. The joint decision, representing a function over the preferences of all agents, is the winner among all possible (candidate) decisions. To compute the winning candidate, previous work has typically assumed that voters send their complete set of preferences for computation, and in fact this has been shown to be required in the worst case. However, in practice, it may be infeasible for all agents to send a complete set of preferences due to communication limitations and willingness to keep as much information private as possible. The goal of this paper is to empirically evaluate algorithms to reduce communication on various sets of experiments. Accordingly, we propose an iterative algorithm that allows the agents to send only part of their preferences, incrementally. Experiments with simulated and real-world data show that this algorithm results in an average of 35% savings in communications, while guaranteeing that the actual winning candidate is revealed. A second algorithm applies a greedy heuristic to save up to 90% of communications. While this heuristic algorithm cannot guarantee that a true winning candidate is found, we show that in practice, close approximations are obtained.
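The incremental-revelation idea above can be illustrated with a minimal sketch using Borda as the example rule (an assumption for illustration; the paper's algorithms and savings figures are more involved). Each round, every voter reveals one more position of their ranking; the process stops as soon as some candidate's guaranteed score beats every rival's best possible score.

```python
# Incremental preference revelation with bound-based termination,
# illustrated with Borda scoring. Everything here is a toy sketch.

def iterative_borda(rankings):
    """rankings: one full preference order per voter (best first).
    Returns (winner, number of positions each voter had to reveal)."""
    m = len(rankings[0])
    n = len(rankings)
    candidates = set(rankings[0])
    revealed = {c: 0 for c in candidates}       # confirmed Borda points
    revealers = {c: set() for c in candidates}  # voters who placed c already
    for k in range(m):
        for v, ranking in enumerate(rankings):
            c = ranking[k]
            revealed[c] += m - 1 - k
            revealers[c].add(v)
        # A still-hidden candidate sits in positions k+1..m-1 of a silent
        # voter's ranking, so each such voter adds at most m - 2 - k points.
        slack = max(m - 2 - k, 0)
        upper = {c: revealed[c] + slack * (n - len(revealers[c]))
                 for c in candidates}
        leader = max(candidates, key=lambda c: revealed[c])
        if all(revealed[leader] > upper[c] for c in candidates if c != leader):
            return leader, k + 1
    return max(candidates, key=lambda c: revealed[c]), m
```

For the profile [["A","B","C"], ["A","C","B"], ["B","A","C"]], the winner "A" is certain after each voter sends only two of three positions, a concrete (if tiny) instance of the communication savings described above.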


ACM Transactions on Intelligent Systems and Technology | 2015

Strategic Information Disclosure to People with Multiple Alternatives

Amos Azaria; Zinovi Rabinovich; Claudia V. Goldman; Sarit Kraus

In this article, we study automated agents that are designed to encourage humans to take some actions over others by strategically disclosing key pieces of information. To this end, we utilize the framework of persuasion games—a branch of game theory that deals with asymmetric interactions where one player (Sender) possesses more information about the world, but it is only the other player (Receiver) who can take an action. In particular, we use an extended persuasion model, where the Sender’s information is imperfect and the Receiver has more than two alternative actions available. We design a computational algorithm that, from the Sender’s standpoint, calculates the optimal information disclosure rule. The algorithm is parameterized by the Receiver’s decision model (i.e., what choice he will make based on the information disclosed by the Sender) and can be retuned accordingly. We then provide an extensive experimental study of the algorithm’s performance in interactions with human Receivers. First, we consider a fully rational (in the Bayesian sense) Receiver decision model and experimentally show the efficacy of the resulting Sender’s solution in a routing domain. Despite the discrepancy in the Sender’s and the Receiver’s utilities from each of the Receiver’s choices, our Sender agent successfully persuaded human Receivers to select an option more beneficial for the agent. Dropping the Receiver’s rationality assumption, we introduce a machine learning procedure that generates a more realistic human Receiver model. We then show its significant benefit to the Sender solution by repeating our routing experiment. To complete our study, we introduce a second (supply-demand) experimental domain and, by contrasting it with the routing domain, obtain general guidelines for a Sender on how to construct a Receiver model.
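The core optimization above, parameterized by a Receiver decision model, can be sketched in miniature: the Sender picks the disclosure whose induced Receiver action maximizes the Sender's own payoff. The routing-style messages and payoffs below are made up for illustration; swapping in a learned human model only changes the receiver_model argument.

```python
# Sender-side disclosure choice, parameterized by a Receiver decision model.
# Messages, actions, and payoffs are illustrative assumptions.

def best_disclosure(messages, receiver_model, sender_utility):
    """messages: the disclosures the Sender may make.
    receiver_model(msg) -> the action this Receiver model picks given msg.
    sender_utility(action) -> the Sender's payoff from that action."""
    return max(messages, key=lambda msg: sender_utility(receiver_model(msg)))
```

This mirrors the retuning described in the abstract: replacing a fully rational receiver_model with a machine-learned one leaves the optimization unchanged.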


Autonomous Agents and Multi-Agent Systems | 2007

Learning to communicate in a decentralized environment

Claudia V. Goldman; Martin Allen; Shlomo Zilberstein

Learning to communicate is an emerging challenge in AI research. It is known that agents interacting in decentralized, stochastic environments can benefit from exchanging information. Multi-agent planning generally assumes that agents share a common means of communication; however, in building robust distributed systems it is important to address potential miscoordination resulting from misinterpretation of messages exchanged. This paper lays foundations for studying this problem, examining its properties analytically and empirically in a decision-theoretic context. We establish a formal framework for the problem, and identify a collection of necessary and sufficient properties for decision problems that allow agents to employ probabilistic updating schemes in order to learn how to interpret what others are communicating. Solving the problem optimally is often intractable, but our approach enables agents using different languages to converge upon coordination over time. Our experimental work establishes how these methods perform when applied to problems of varying complexity.
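The probabilistic updating schemes mentioned above can be illustrated with a minimal Bayesian sketch (an assumption for illustration, not the paper's exact scheme): the hearer keeps a distribution over candidate interpretations of a message and shifts mass toward interpretations that explain the observed outcomes of acting on it.

```python
# One Bayesian update over candidate interpretations of a foreign message.
# Interpretation names and probabilities are illustrative.

def bayes_update(prior, likelihood):
    """prior: dict interpretation -> probability.
    likelihood: dict interpretation -> P(observed outcome | interpretation).
    Returns the normalized posterior."""
    unnormalized = {i: prior[i] * likelihood[i] for i in prior}
    total = sum(unnormalized.values())
    return {i: p / total for i, p in unnormalized.items()}
```

Repeating such updates across interactions concentrates belief on the interpretation consistent with experience, which is how agents using different languages can still converge on coordination over time.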


international joint conference on artificial intelligence | 1995

Learn Your Opponent's Strategy (in Polynomial Time)!

Yishay Mor; Claudia V. Goldman; Jeffrey S. Rosenschein

Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world.


Autonomous Agents and Multi-Agent Systems | 2016

Strategic advice provision in repeated human-agent interactions

Amos Azaria; Ya'akov Gal; Sarit Kraus; Claudia V. Goldman

This paper addresses the problem of automated advice provision in scenarios that involve repeated interactions between people and computer agents. This problem arises in many applications such as route selection systems, office assistants and climate control systems. To succeed in such settings agents must reason about how their advice influences people’s future actions or decisions over time. This work models such scenarios as a family of repeated bilateral interactions called “choice selection processes”, in which humans or computer agents may share certain goals, but are essentially self-interested. We propose a social agent for advice provision (SAP) for such environments that generates advice using a social utility function which weighs the sum of the individual utilities of both agent and human participants. The SAP agent models human choice selection using hyperbolic discounting and samples the model to infer the best weights for its social utility function. We demonstrate the effectiveness of SAP in two separate domains which vary in the complexity of modeling human behavior as well as the information that is available to people when they need to decide whether to accept the agent’s advice. In both of these domains, we evaluated SAP in extensive empirical studies involving hundreds of human subjects. SAP was compared to agents using alternative models of choice selection processes informed by behavioral economics and psychological models of decision-making. Our results show that in both domains, the SAP agent was able to outperform alternative models. This work demonstrates the efficacy of combining computational methods with behavioral economics to model how people reason about machine-generated advice and presents a general methodology for agent-design in such repeated advice settings.
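The hyperbolic discounting model of human choice used above has a simple closed form: a reward t steps away is valued at reward / (1 + k*t), with k a per-person discount parameter fitted from data. The parameter values below are assumptions for illustration, not the paper's fitted model.

```python
# Hyperbolic discounting of delayed rewards; k controls impatience.
# Numbers are illustrative, not fitted values from the study.

def hyperbolic_value(reward, delay, k=1.0):
    return reward / (1.0 + k * delay)

def prefers_now(small_now, large_later, delay, k=1.0):
    """Does the model take a small immediate reward over a larger delayed one?"""
    return hyperbolic_value(small_now, 0, k) > hyperbolic_value(large_later, delay, k)
```

With k = 1, a reward of 10 due in 4 steps is worth only 2, so the model takes 3 now; a more patient agent (k = 0.1) reverses that choice, which is the kind of behavior the SAP agent exploits when tuning its advice.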

Collaboration


Dive into Claudia V. Goldman's collaborations.

Top Co-Authors

Jeffrey S. Rosenschein, Hebrew University of Jerusalem
Amos Azaria, Carnegie Mellon University
Shlomo Zilberstein, University of Massachusetts Amherst
Avi Rosenfeld, Jerusalem College of Technology
Victor R. Lesser, University of Massachusetts Amherst