
Publications


Featured research published by Jordi Sabater-Mir.


Artificial Intelligence Review | 2013

Computational trust and reputation models for open multi-agent systems: a review

Isaac Pinyol; Jordi Sabater-Mir

In open environments, agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. Scientific research in this field has increased considerably, and reputation and trust mechanisms are already considered key elements in the design of multi-agent systems. In this paper we provide a survey that, far from being exhaustive, intends to show the most representative models that currently exist in the literature. For this purpose we consider several dimensions of analysis that appeared in three existing surveys, and provide new dimensions that complement the existing ones and that have not been treated directly. Moreover, besides showing the original classification that each of the surveys provides, we also classify models that were not taken into account by the original surveys. The paper illustrates the proliferation in the past few years of models that follow a more cognitive approach, in which the representation of trust and reputation as mental attitudes is as important as the final values of trust and reputation. Furthermore, we provide an objective definition of trust, based on Castelfranchi's idea that trust implies a decision to rely on someone.


International Joint Conference on Artificial Intelligence | 2011

Social instruments for robust convention emergence

Daniel Villatoro; Jordi Sabater-Mir; Sandip Sen

We present the notion of Social Instruments as mechanisms that facilitate the emergence of conventions from repeated interactions between members of a society. Specifically, we focus on two social instruments: rewiring and observation. Our main goal is to provide agents with tools that allow them to leverage their social network of interactions when addressing coordination and learning problems, paying special attention to dissolving metastable subconventions. Our initial experiments shed some light on how Self-Reinforcing Substructures (SRS) in the network prevent full convergence to society-wide conventions, resulting in reduced convergence rates. The use of a composed social instrument, observation plus rewiring, allows agents to achieve convergence by eliminating the subconventions that would otherwise remain metastable.
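
The abstract does not spell out how the two instruments operate; the Python fragment below is a minimal sketch of one plausible reading, in which "observation" means adopting the majority convention among neighbors and "rewiring" means dropping a link to a discordant partner. The network size, degree, and update rules are illustrative assumptions, not the paper's actual algorithm.

```python
import random

random.seed(1)

N = 50        # agents
STEPS = 2000  # pairwise interactions

# Each agent holds a binary convention and a set of neighbors in a
# random interaction network.
actions = [random.choice([0, 1]) for _ in range(N)]
neighbors = [set(random.sample([j for j in range(N) if j != i], 4))
             for i in range(N)]

for _ in range(STEPS):
    i = random.randrange(N)
    if not neighbors[i]:
        continue
    j = random.choice(sorted(neighbors[i]))

    if actions[i] != actions[j]:
        # Rewiring: drop the discordant link, attach to a random agent.
        neighbors[i].discard(j)
        neighbors[j].discard(i)
        candidates = [a for a in range(N) if a != i and a not in neighbors[i]]
        if candidates:
            k = random.choice(candidates)
            neighbors[i].add(k)
            neighbors[k].add(i)

    # Observation: adopt the majority convention among neighbors, which
    # helps dissolve metastable subconventions.
    if neighbors[i]:
        votes = sum(actions[n] for n in neighbors[i])
        actions[i] = 1 if 2 * votes > len(neighbors[i]) else 0

print("fraction following convention 1:", sum(actions) / N)
```

In this toy version, rewiring alone tends to segregate the network into like-minded clusters, while adding observation pushes those clusters toward a single society-wide convention.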


International Journal of Approximate Reasoning | 2007

On representation and aggregation of social evaluations in computational trust and reputation models

Jordi Sabater-Mir; Mario Paolucci

Interest in computational trust and reputation models is on the rise. One of the most important aspects of these models is how they deal with information received from other individuals. More generally, the critical choice is how to represent and how to aggregate social evaluations. In this article, we analyze current approaches to the representation and aggregation of social evaluations under the guidelines of a set of basic requirements. We then present two different proposals for dealing with uncertainty in the context of the Repage system [J. Sabater, M. Paolucci, R. Conte, Repage: Reputation and image among limited autonomous partners, Journal of Artificial Societies and Social Simulation 9 (2). URL http://jasss.soc.surrey.ac.uk/9/2/3.html], a computational module for the management of reputational information based on a cognitive model of imAGE, REPutation and their interplay already developed by the authors. We finally discuss these two proposals in the context of several examples.
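
Repage represents a social evaluation as a probability distribution over discrete quality levels. As a flavor of what "aggregation of social evaluations" can mean concretely, here is a minimal sketch that combines per-source distributions by reliability weighting; the `aggregate` function and the weighting scheme are illustrative assumptions, not the article's actual proposals.

```python
# Sketch: a social evaluation as a probability distribution over five
# discrete levels, aggregated by source-reliability weighting.

LEVELS = ["very bad", "bad", "neutral", "good", "very good"]

def aggregate(evaluations, weights):
    """Combine per-source distributions into one evaluation."""
    total = sum(weights)
    return [sum(w * e[k] for e, w in zip(evaluations, weights)) / total
            for k in range(len(LEVELS))]

# Two informants report on the same target; the second is trusted less.
e1 = [0.0, 0.1, 0.2, 0.5, 0.2]
e2 = [0.4, 0.3, 0.2, 0.1, 0.0]
print(aggregate([e1, e2], weights=[0.8, 0.2]))
```

A distribution-based representation like this keeps the uncertainty visible, whereas collapsing each report to a single number would discard exactly the information the article argues matters.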


Web Intelligence | 2009

Topology and Memory Effect on Convention Emergence

Daniel Villatoro; Sandip Sen; Jordi Sabater-Mir

Social conventions are useful self-sustaining protocols for groups to coordinate behavior without a centralized entity enforcing coordination. We perform an in-depth study of different network structures, to compare and evaluate the effects of different network topologies on the success and rate of emergence of social conventions. While others have investigated memory for learning algorithms, the effects of memory, or history of past activities, on the reward received by interacting agents have not been adequately investigated. We propose a reward metric that takes into consideration the past action choices of the interacting agents. The research question to be answered is what effect the history-based reward function and the learning approach have on convergence time to conventions in different topologies. We experimentally investigate the effects of history size, agent population size and neighborhood size on the emergence of social conventions.
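
The abstract names a reward metric that depends on the partner's past actions but gives no formula; the snippet below is one plausible shape for such a metric, where the reward is the fraction of the partner's recent history that matches the agent's current action. The function name `history_reward` and the window size are invented for illustration.

```python
from collections import deque

# Sketch of a history-based reward for a two-action coordination game:
# an agent is rewarded in proportion to how often its current action
# matches the partner's recent history, not just the partner's last move.

def history_reward(my_action, partner_history):
    if not partner_history:
        return 0.0
    matches = sum(1 for a in partner_history if a == my_action)
    return matches / len(partner_history)

partner = deque(maxlen=5)          # memory of the partner's last 5 actions
for observed in [0, 0, 1, 0, 0]:
    partner.append(observed)

print(history_reward(0, partner))  # 0.8: mostly consistent with the partner
```

Under a metric like this, the history size (the deque's `maxlen`) directly controls how smoothly rewards track a partner's behavior, which is exactly the kind of parameter the paper varies experimentally.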


International Joint Conference on Artificial Intelligence | 2011

Dynamic sanctioning for robust and cost-efficient norm compliance

Daniel Villatoro; Giulia Andrighetto; Jordi Sabater-Mir; Rosaria Conte

As explained by Axelrod in his seminal work An Evolutionary Approach to Norms, punishment is a key mechanism for achieving the necessary social control and imposing social norms in a self-regulated society. In this paper, we distinguish between two enforcement mechanisms, punishment and sanction, focusing on the specific ways in which they favor the emergence and maintenance of cooperation. The key research question is to find more stable and cheaper mechanisms for norm compliance in hybrid social environments (populated by humans and computational agents). To achieve this, we have developed a normative agent able to punish and sanction defectors and to dynamically choose the right amount of punishment and sanction to impose on them (the Dynamic Adaptation Heuristic). The results obtained through agent-based simulation show that sanction is more effective and less costly than punishment in the achievement and maintenance of cooperation, and that it makes the population more resilient to sudden changes than mere punishment alone.
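
The Dynamic Adaptation Heuristic is only named in the abstract, so the controller below is a minimal sketch of the underlying idea: raise the sanction while defection exceeds some tolerance and lower it as compliance recovers, keeping enforcement cheap. The function name, parameters, and update rule are assumptions for illustration, not the paper's heuristic.

```python
# Sketch: adapt the sanction imposed on defectors to the observed
# defection rate, so enforcement cost falls once cooperation holds.

def adapt_sanction(current, defection_rate,
                   target=0.05, step=0.1,
                   minimum=0.0, maximum=5.0):
    if defection_rate > target:
        current += step          # defection above tolerance: sanction harder
    else:
        current -= step          # compliance restored: cheapen enforcement
    return max(minimum, min(maximum, current))

sanction = 1.0
for rate in [0.30, 0.20, 0.10, 0.04, 0.02]:
    sanction = adapt_sanction(sanction, rate)
    print(f"defection {rate:.2f} -> sanction {sanction:.1f}")
```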


PLOS ONE | 2013

Punish and Voice: Punishment Enhances Cooperation when Combined with Norm-Signalling

Giulia Andrighetto; Jordi Brandts; Rosaria Conte; Jordi Sabater-Mir; Hector Solaz; Daniel Villatoro

Material punishment has been suggested to play a key role in sustaining human cooperation. Experimental findings, however, show that inflicting mere material costs does not always increase cooperation and may even have detrimental effects. Indeed, ethnographic evidence suggests that the most typical punishing strategies in human ecologies (e.g., gossip, derision, blame and criticism) naturally combine normative information with material punishment. Using laboratory experiments with humans, we show that the interaction of norm communication and material punishment leads to higher and more stable cooperation at a lower cost for the group than when used separately. In this work, we argue and provide experimental evidence that successful human cooperation is the outcome of the interaction between instrumental decision-making and the norm psychology humans are provided with. Norm psychology is the cognitive machinery for detecting and reasoning about norms, characterized by a salience mechanism that tracks how prominent a norm is within a group. We test our hypothesis both in the laboratory and with an agent-based model. The agent-based model incorporates fundamental aspects of norm psychology absent from previous work. The combination of these methods allows us to provide an explanation for the proximate mechanisms behind the observed cooperative behaviour. The consistency between the two sources of data supports our hypothesis that cooperation is a product of norm psychology solicited by norm-signalling and coercive devices.
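
To make the salience mechanism concrete, here is a toy rendering of how an agent in such a model might track a norm's prominence: observed compliance, punishment, and explicit norm-signalling raise the estimate, unpunished violations lower it. The event types and weights are invented for illustration and are not the paper's agent-based model.

```python
# Toy sketch of norm salience tracking. Weights are illustrative
# assumptions; note that norm-signalling ("voice") moves salience
# more than mere material punishment.

WEIGHTS = {
    "compliance": +0.05,
    "norm_signal": +0.15,   # punishment accompanied by normative voice
    "punishment": +0.10,
    "unpunished_violation": -0.20,
}

def update_salience(salience, event):
    return min(1.0, max(0.0, salience + WEIGHTS[event]))

s = 0.5
for event in ["compliance", "unpunished_violation", "norm_signal"]:
    s = update_salience(s, event)
    print(event, "->", round(s, 2))
```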


Autonomous Agents and Multi-Agent Systems | 2012

Reputation-based decisions for logic-based cognitive agents

Isaac Pinyol; Jordi Sabater-Mir; Pilar Dellunde; Mario Paolucci

Computational trust and reputation models have been recognized as one of the key technologies required to design and implement agent systems. These models manage and aggregate the information needed by agents to efficiently perform partner selection in uncertain situations. For simple applications, a game-theoretical approach similar to that used in most models can suffice. However, if we want to tackle problems found in socially complex virtual societies, we need more sophisticated trust and reputation systems. In this context, the reputation-based decisions that agents make take on special relevance and can be as important as the reputation model itself. In this paper, we propose a possible integration of a cognitive reputation model, Repage, into a cognitive BDI agent. First, we specify a belief logic capable of capturing the semantics of Repage information, which encodes probabilities. This logic is defined by means of a hierarchy of two first-order languages, allowing the specification of axioms as first-order theories. The belief logic integrates the information coming from Repage in terms of image and reputation, and combines them, defining a typology of agents depending on this combination. We use this logic to build a complete graded BDI model specified as a multi-context system where beliefs, desires, intentions and plans interact with each other to perform BDI reasoning. We conclude the paper with an example and a related work section that compares our approach with current state-of-the-art models.
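
The paper's combination of image and reputation happens inside a graded belief logic; the snippet below only conveys the numerical flavor of such a combination. The three typologies, their names, and the weights are illustrative assumptions, not the paper's axiomatization.

```python
# Sketch: agent typologies that combine an image value (from direct
# experience) and a reputation value (circulating opinion) into one
# graded belief about a partner.

def combined_belief(image, reputation, typology="balanced"):
    weights = {
        "credulous": 0.2,   # leans on what others say
        "balanced": 0.5,
        "skeptical": 0.8,   # leans on its own experience
    }
    w = weights[typology]
    return w * image + (1 - w) * reputation

# Direct experience is good, gossip is bad; the typology decides.
print(combined_belief(image=0.9, reputation=0.3, typology="skeptical"))
```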


Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 2005

Trusting agents for trusting electronic societies

Rino Falcone; Kathleen S Barber; Jordi Sabater-Mir; Munindar P. Singh

In this paper we use recursive modelling to formalize sanction-based obligations in a qualitative game theory. In particular, we formalize an agent who attributes mental attitudes such as goals and desires to the normative system that creates and enforces its obligations. The wishes (goals) of the normative system are the commands (obligations) of the agent. Since the agent is able to reason about the normative system's behavior, our model accounts for many ways in which an agent can violate a norm believing that it will not be sanctioned. We thus propose a cognitive theory of normative reasoning which can be applied in theories requiring dynamic trust, in order to understand when it is necessary to revise that trust.


Knowledge-Based Systems | 2013

Decision making matters: A better way to evaluate trust models

David Jelenc; Ramón Hermoso; Jordi Sabater-Mir; Denis Trček

Trust models are mechanisms that predict the behavior of potential interaction partners. They have been proposed in several domains, and many advances in trust formation have been made recently. The question of how to compare trust models, however, is still without a clear answer. Traditionally, authors set up ad hoc experiments and present evaluation results that are difficult to compare, and sometimes even to interpret, in the context of other trust models. As a solution, the community came up with common evaluation platforms, called trust testbeds. In this paper we expose shortcomings of the evaluation models that existing testbeds use: they evaluate trust models by combining them with some ad hoc decision-making mechanism and then evaluate the quality of the trust-based decisions. They assume that if all trust models use the same decision-making mechanism, the mechanism itself becomes irrelevant for the evaluation. We hypothesized that the choice of decision-making mechanism is in fact relevant. To test our claim we built a testbed, called the Alpha testbed, that can evaluate trust models either with or without a decision-making mechanism. With it we evaluated five well-known trust models using two different decision-making mechanisms. The results confirm our hypothesis: the choice of decision-making mechanism influences the performance of trust models. Based on our findings, we recommend evaluating trust models independently of the decision-making mechanism, and we also provide a method (and a tool) to do so.
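
The sketch below illustrates the paper's central distinction: scoring a trust model's raw estimates directly (decision-free) versus scoring the payoff of decisions derived from them through a particular mechanism (decision-coupled). The metrics and the two decision mechanisms here are invented for illustration and are not the Alpha testbed's actual interface.

```python
import random

random.seed(0)

true_quality = {"a": 0.9, "b": 0.5, "c": 0.2}
trust_values = {"a": 0.7, "b": 0.6, "c": 0.1}   # one model's estimates

def ranking_error(trust, truth):
    """Decision-free metric: count misordered partner pairs."""
    agents = list(truth)
    return sum(1 for i in agents for j in agents
               if truth[i] > truth[j] and trust[i] <= trust[j])

def greedy(trust):
    return max(trust, key=trust.get)

def epsilon_greedy(trust, eps=0.3):
    return random.choice(list(trust)) if random.random() < eps else greedy(trust)

def mean_payoff(decide, rounds=1000):
    """Decision-coupled metric: average quality of chosen partners."""
    return sum(true_quality[decide(trust_values)] for _ in range(rounds)) / rounds

print("ranking error:", ranking_error(trust_values, true_quality))
print("greedy payoff:", round(mean_payoff(greedy), 2))
print("eps-greedy payoff:", round(mean_payoff(epsilon_greedy), 2))
```

The same trust model scores perfectly on the ranking metric yet yields different payoffs under the two decision mechanisms, which is the confound the paper argues existing testbeds overlook.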


Journal of Logic and Computation | 2013

Opening the black box of trust

Andrew Koster; W. Marco Schorlemmer; Jordi Sabater-Mir

Trust models as described thus far in the literature can be seen as monolithic structures: a trust model is provided with a variety of inputs, and the model performs calculations, resulting in a trust evaluation as output. The agent has no direct method of adapting its trust model to its needs in a given context. In this article, we propose a first step in allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the cognitive architecture of the agent. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models, BRS, ReGReT and ForTrust, can be incorporated into a BDI reasoning system using our framework.
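
As a rough intuition for "opening the black box", the sketch below exposes a trust model's internal factor weights so the agent can adjust them per context instead of treating the model as a fixed function. The class `AdaptableTrustModel`, its factors, and its weights are invented for illustration; the article's actual proposal is a declarative multi-context formalization, not this procedural code.

```python
# Sketch: a trust model whose internal factors are open to the agent's
# own reasoning, rather than hidden inside a monolithic calculation.

class AdaptableTrustModel:
    def __init__(self):
        # Factors influencing the trust calculation, visible to the agent.
        self.weights = {"direct_experience": 0.6, "witness_reports": 0.4}

    def evaluate(self, evidence):
        total = sum(self.weights.values())
        return sum(self.weights[f] * evidence[f] for f in self.weights) / total

    def adapt(self, factor, weight):
        """The agent reasons about a factor and changes its influence."""
        self.weights[factor] = weight

model = AdaptableTrustModel()
evidence = {"direct_experience": 0.9, "witness_reports": 0.2}
print(model.evaluate(evidence))          # default weighting
model.adapt("witness_reports", 0.1)      # e.g. gossip is unreliable here
print(model.evaluate(evidence))
```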

Collaboration


Dive into Jordi Sabater-Mir's collaborations.

Top Co-Authors

Daniel Villatoro (Spanish National Research Council)

Isaac Pinyol (Spanish National Research Council)

Rosaria Conte (National Research Council)

Nardine Osman (Spanish National Research Council)

Andrew Koster (Universidade Federal do Rio Grande do Sul)

Mario Paolucci (National Research Council)

Jordi Brandts (Spanish National Research Council)

Marco Schorlemmer (Spanish National Research Council)