
Publication


Featured research published by Radu Jurca.


adaptive agents and multi-agent systems | 2003

An incentive compatible reputation mechanism

Radu Jurca; Boi Faltings

Traditional centralized approaches to security are difficult to apply to large, distributed marketplaces in which software agents operate. Developing a notion of trust that is based on the reputation of agents can provide a softer notion of security that is sufficient for many multi-agent applications. We address the issue of incentive-compatibility (i.e. how to make it optimal for agents to share reputation information truthfully) by introducing a side-payment scheme, organized through a set of broker agents, that makes it rational for software agents to truthfully share the reputation information they have acquired in their past experience. We also show how to use a cryptographic mechanism to protect the integrity of reputation information and to achieve a tight binding between the identity and reputation of an agent.


international world wide web conferences | 2007

Reliable QoS monitoring based on client feedback

Radu Jurca; Boi Faltings; Walter Binder

Service-level agreements (SLAs) establish a contract between service providers and clients concerning Quality of Service (QoS) parameters. Without proper penalties, service providers have strong incentives to deviate from the advertised QoS, causing losses to the clients. Reliable QoS monitoring (and proper penalties computed on the basis of delivered QoS) are therefore essential for the trustworthiness of a service-oriented environment. In this paper, we present a novel QoS monitoring mechanism based on quality ratings from the clients. A reputation mechanism collects the ratings and computes the actual quality delivered to the clients. The mechanism provides incentives for the clients to report honestly, and pays special attention to minimizing cost and overhead.


electronic commerce | 2006

Minimum payments that reward honest reputation feedback

Radu Jurca; Boi Faltings

Online reputation mechanisms need honest feedback to function effectively. Self-interested agents report the truth only when explicit rewards offset the cost of reporting and the potential gains that can be obtained from lying. Side-payment schemes (monetary rewards for submitted feedback) can make truth-telling rational based on the correlation between the reports of different buyers. In this paper we use the idea of automated mechanism design to construct the payments that minimize the budget required by an incentive-compatible reputation mechanism. Such payment schemes are defined by a linear optimization problem that can be solved efficiently in realistic settings. Furthermore, we investigate two directions for further lowering the cost of incentive-compatibility: using several reference reports to construct the side-payments, and filtering out reports that are probably false.
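The payment-design step described in this abstract can be sketched as a small linear program. The numbers below (the prior, the binary signal likelihoods, and the required truth-telling margin) are illustrative assumptions, not the paper's model; only the structure follows the abstract: minimize the expected budget of the payments, subject to incentive-compatibility constraints that compare truthful and untruthful reports against a single reference report.

```python
# Sketch of automated mechanism design for feedback payments: choose
# payments tau(report, reference_report) minimizing the expected budget,
# subject to truth-telling beating lying by a fixed margin. All numbers
# (prior, likelihoods, margin) are illustrative assumptions.
from scipy.optimize import linprog

prior_h = 0.5                      # P(product is High quality)
p_g = {"H": 0.9, "L": 0.2}         # P(signal = good | state), assumed

def ref_given_mine(mine):
    """P(reference report = good | my signal), via the posterior on the state."""
    like_h = p_g["H"] if mine == "g" else 1 - p_g["H"]
    like_l = p_g["L"] if mine == "g" else 1 - p_g["L"]
    post_h = prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_l)
    return post_h * p_g["H"] + (1 - post_h) * p_g["L"]

q_g = ref_given_mine("g")          # P(ref = good | I saw good)
q_b = ref_given_mine("b")          # P(ref = good | I saw bad)
margin = 1.0                        # required advantage of honest reporting

# Variables x = [tau_gg, tau_gb, tau_bg, tau_bb], all >= 0.
# Objective: expected budget over a truthful reporter's signal.
p_mine_g = prior_h * p_g["H"] + (1 - prior_h) * p_g["L"]
c = [p_mine_g * q_g, p_mine_g * (1 - q_g),
     (1 - p_mine_g) * q_b, (1 - p_mine_g) * (1 - q_b)]

# Incentive compatibility, written as A_ub @ x <= b_ub:
A = [[-q_g, -(1 - q_g), q_g, (1 - q_g)],   # saw good: reporting good wins
     [q_b, (1 - q_b), -q_b, -(1 - q_b)]]   # saw bad: reporting bad wins
b = [-margin, -margin]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 4)
tau_gg, tau_gb, tau_bg, tau_bb = res.x
```

Because the two reports are positively correlated (q_g > q_b), the program is feasible and the solver returns the cheapest incentive-compatible payment scheme for this toy setting.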


Journal of Artificial Intelligence Research | 2009

Mechanisms for making crowds truthful

Radu Jurca; Boi Faltings

We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likelihoods of reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios, where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.
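The honest-reporting Nash equilibrium mentioned in the abstract can be illustrated with a minimal peer-prediction sketch. The numbers and the choice of the log scoring rule below are illustrative assumptions, not the paper's exact mechanism: a reporter is scored on how well the posterior induced by her report predicts a reference reporter, and if that reference reporter is truthful, truth-telling is a strict best response.

```python
# Minimal peer-prediction sketch: score a report by how well the belief
# it induces predicts a (truthful) reference report, using a log scoring
# rule. Numbers are illustrative assumptions.
import math

prior_h = 0.5
p_good = {"H": 0.85, "L": 0.3}   # P(signal = good | state), assumed

def posterior_ref_good(signal):
    """P(reference reports good | my signal), via Bayes over the state."""
    like_h = p_good["H"] if signal == "good" else 1 - p_good["H"]
    like_l = p_good["L"] if signal == "good" else 1 - p_good["L"]
    post_h = prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_l)
    return post_h * p_good["H"] + (1 - post_h) * p_good["L"]

def reward(report, ref_report):
    """Log scoring rule applied to the belief my report induces."""
    q = posterior_ref_good(report)
    return math.log(q if ref_report == "good" else 1 - q)

def expected_reward(true_signal, report):
    q_true = posterior_ref_good(true_signal)
    return (q_true * reward(report, "good")
            + (1 - q_true) * reward(report, "bad"))

# If the reference reporter is truthful, truth-telling is a best response
# for either observed signal (i.e., a Nash equilibrium in this toy model):
for s, other in (("good", "bad"), ("bad", "good")):
    assert expected_reward(s, s) > expected_reward(s, other)
```

The strict inequality relies on the two signals inducing different posteriors over the reference report, which is exactly the correlation the abstract exploits; collusion resistance requires additional constraints beyond this sketch.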


electronic commerce | 2007

Collusion-resistant, incentive-compatible feedback payments

Radu Jurca; Boi Faltings

Online reputation mechanisms need honest feedback to function effectively. Self-interested agents report the truth only when explicit rewards offset the potential gains obtained from lying. Feedback payment schemes (monetary rewards for submitted feedback) can make truth-telling rational based on the correlation between the reports of different buyers. In this paper we investigate incentive-compatible payment mechanisms that are also resistant to collusion: groups of agents cannot collude on a lying strategy without suffering monetary losses. We analyze several scenarios, where, for example, some or all of the agents collude. For each scenario we investigate both existential and implementation problems. Throughout the paper we use automated mechanism design to compute the best possible mechanism for a given setting.


international conference on service oriented computing | 2005

Reputation-based service level agreements for web services

Radu Jurca; Boi Faltings

Most web services need to be contracted through service level agreements that typically specify a certain quality of service (QoS) in return for a certain price. We propose a new form of service level agreement where the price is determined by the QoS actually delivered. We show that such agreements make it optimal for the service provider to deliver the service at the promised quality. To allow efficient monitoring of the actual QoS, we introduce a reputation mechanism. A scoring rule makes it optimal for the users of a service to correctly report the QoS they observed. Thus, we obtain a practical scheme for service-level agreements that makes it uninteresting for providers to deviate from their best effort.
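The claim that a scoring rule makes honest QoS reporting optimal can be checked with a tiny example. The quadratic (Brier) rule and the numbers below are illustrative assumptions rather than the paper's exact construction: a user paid by a proper scoring rule maximizes her expected payment by reporting the success probability she actually believes.

```python
# Why a proper scoring rule elicits honest QoS reports: with the
# quadratic (Brier) rule, expected score is maximized by reporting
# one's true belief. The 0.8 belief below is an illustrative assumption.
def brier_score(report_p, outcome):
    """Quadratic scoring rule for a binary outcome (1 = SLA was met)."""
    return 1 - (outcome - report_p) ** 2

def expected_score(true_p, report_p):
    """Expected score when the user believes the success rate is true_p."""
    return (true_p * brier_score(report_p, 1)
            + (1 - true_p) * brier_score(report_p, 0))

true_p = 0.8  # the QoS success rate the client actually observed
# Searching all reports on a 1% grid, the honest report scores best:
best = max(expected_score(true_p, r / 100) for r in range(101))
assert abs(best - expected_score(true_p, true_p)) < 1e-12
```

Because the rule is proper, any misreport strictly lowers the expected payment, which is what lets the delivered QoS (and hence the price in the proposed agreements) be estimated from client reports.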


electronic commerce | 2007

Understanding user behavior in online feedback reporting

Arjun Talwar; Radu Jurca; Boi Faltings

Online reviews have become increasingly popular as a way to judge the quality of various products and services. Previous work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult. In this paper, we investigate underlying factors that influence user behavior when reporting feedback. We look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports. We first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature. Second, we show that a user's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews. Both give us a less noisy way to produce rating estimates and reveal the reasons behind user bias. Our hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.


acm special interest group on data communication | 2005

Reputation-based pricing of P2P services

Radu Jurca; Boi Faltings

In future peer-to-peer service-oriented computing systems, maintaining a cooperative equilibrium is a non-trivial task. In the absence of Trusted Third Parties (TTPs) or verification authorities, rational service providers minimize their costs by providing ever degrading service quality levels. Anticipating this, rational clients are willing to pay only the minimum amounts (often zero), which leads to the collapse of the market. In this paper, we show how a simple reputation mechanism can be used to overcome this moral hazard problem. The mechanism does not act by social exclusion (i.e. excluding providers that cheat) but rather by allowing flexible service level agreements in which quality can be traded for price. We show that such a mechanism can drive service providers of different types to exert the socially efficient effort levels.


Journal of Artificial Intelligence Research | 2007

Obtaining reliable feedback for sanctioning reputation mechanisms

Radu Jurca; Boi Faltings

Reputation mechanisms offer an effective alternative to verification authorities for building trust in electronic markets with moral hazard. Future clients guide their business decisions by considering the feedback from past transactions; if truthfully exposed, cheating behavior is sanctioned and thus becomes irrational. It therefore becomes important to ensure that rational clients have the right incentives to report honestly. As an alternative to side-payment schemes that explicitly reward truthful reports, we show that honesty can emerge as a rational behavior when clients have a repeated presence in the market. To this end we describe a mechanism that supports an equilibrium where truthful feedback is obtained. Then we characterize the set of Pareto-optimal equilibria of the mechanism, and derive an upper bound on the percentage of false reports that can be recorded by the mechanism. An important role in the existence of this bound is played by the fact that rational clients can establish a reputation for reporting honestly.


TADA/AMEC'06 Proceedings of the 2006 AAMAS workshop and TADA/AMEC 2006 conference on Agent-mediated electronic commerce: automated negotiation and strategy design for electronic markets | 2006

Robust incentive-compatible feedback payments

Radu Jurca; Boi Faltings

Online reputation mechanisms need honest feedback to function effectively. Self-interested agents report the truth only when explicit rewards offset the cost of reporting and the potential gains that can be obtained from lying. Payment schemes (monetary rewards for submitted feedback) can make truth-telling rational based on the correlation between the reports of different clients. In this paper we use the idea of automated mechanism design to construct the best (i.e., budget minimizing) incentive-compatible payments that are also robust to some degree of private information.

Collaboration


Top Co-Authors

Boi Faltings, École Polytechnique Fédérale de Lausanne
Arjun Talwar, École Polytechnique Fédérale de Lausanne
Florent Garcin, École Polytechnique Fédérale de Lausanne
Goran Radanovic, École Polytechnique Fédérale de Lausanne
Laurent Grangier, École Polytechnique Fédérale de Lausanne
Michael Schumacher, University of Applied Sciences Western Switzerland
Jason Jingshi Li, Australian National University