Publication


Featured research published by Jean-Marc Thévenin.


CLIMA'11: Proceedings of the 12th International Conference on Computational Logic in Multi-Agent Systems | 2011

A modal framework for relating belief and signed information

Emiliano Lorini; Laurent Perrussel; Jean-Marc Thévenin

The aim of this paper is to propose a modal framework for reasoning about signed information. This framework allows agents to keep track of the sources of the information they receive in a multi-agent system. Agents can then elaborate and justify their current belief state by considering a reliability relation over the sources of information. The belief elaboration process is considered from two perspectives: (i) statically, an agent aggregates received signed information according to its preferred sources in order to build its beliefs, and (ii) dynamically, as an agent receives information it adapts its belief state about signed information. Splitting the notions of belief and signed statement is useful for handling the underlying trust issue: an agent believes a statement because it can justify the statement's origin and its reliability.
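The static aggregation step can be illustrated with a minimal sketch, not the paper's modal semantics: statements are propositional literals, each signed by a source, and the reliability relation is a simple ordered list of sources. The source names and literals below are invented for illustration.

```python
# Sketch of aggregating signed statements into a belief set by walking
# sources from most to least reliable. A literal is adopted unless its
# negation was already adopted from a more reliable source.
# Sources s1..s3 and the literals are illustrative assumptions.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def aggregate(signed, reliability):
    """signed: dict source -> set of literals;
    reliability: list of sources, most reliable first."""
    beliefs = set()
    for source in reliability:
        for lit in signed.get(source, set()):
            if negate(lit) not in beliefs:
                beliefs.add(lit)
    return beliefs

signed = {"s1": {"p", "q"}, "s2": {"~p", "r"}, "s3": {"~q"}}
print(aggregate(signed, ["s1", "s2", "s3"]))  # s2's "~p" loses to s1's "p"
```

Because every adopted literal can be traced back to the most reliable source that signed it, the agent can justify each belief by pointing at that source, which is the trust intuition the abstract describes.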


Adaptive Agents and Multi-Agent Systems | 2004

A Logical Approach for Describing (Dis)Belief Change and Message Processing

Laurent Perrussel; Jean-Marc Thévenin

This paper focuses on the features of two KQML performatives, namely tell and untell, in the context of non-prioritized belief change. Tell allows agents to send beliefs, while untell allows agents to send explicit disbeliefs. In a multi-agent system, agents have to change their beliefs when they receive new information from other agents. They may revise or contract their belief state accordingly. The revision action consists of inserting a new belief into a belief set, while the contraction action consists of managing a set of disbeliefs. Whenever incoming information entails inconsistencies in an agent's belief state, the agent must either drop some beliefs or refuse the incoming statement. For this, agents consider a preference relation over the other agents embedded in the multi-agent system and may reject new information based on their belief state and this preference relation. In this article, we survey a logic-based framework for handling messages and (dis)belief change. In this context, we formally describe the consequences of the tell and untell performatives.
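A heavily simplified sketch of this behavior, assuming atomic literals and a total preference order over senders (the agent names and the exact rejection rule are illustrative assumptions, not the paper's logic):

```python
# tell/untell handling with beliefs and explicit disbeliefs under a
# preference relation over sending agents. An incoming statement is
# rejected (non-prioritized change) when it clashes with a statement
# signed by a strictly preferred agent. Agent names are hypothetical.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

class Agent:
    def __init__(self, preference):
        self.preference = preference   # list of agents, most trusted first
        self.beliefs = {}              # literal -> signing agent
        self.disbeliefs = {}           # literal -> signing agent

    def _trusts_more(self, a, b):
        return self.preference.index(a) < self.preference.index(b)

    def tell(self, sender, lit):
        """Revision: insert lit unless its negation is held on behalf
        of a strictly preferred agent."""
        holder = self.beliefs.get(negate(lit))
        if holder is not None and self._trusts_more(holder, sender):
            return False               # incoming statement refused
        self.beliefs.pop(negate(lit), None)
        self.beliefs[lit] = sender
        self.disbeliefs.pop(lit, None)
        return True

    def untell(self, sender, lit):
        """Contraction: record an explicit disbelief in lit and drop lit."""
        self.disbeliefs[lit] = sender
        self.beliefs.pop(lit, None)
        return True

a = Agent(preference=["alice", "bob"])
a.tell("alice", "p")
print(a.tell("bob", "~p"))  # False: alice is preferred over bob
```

The key point the sketch captures is that tell and untell act on two separate stores, beliefs and disbeliefs, and that the preference relation decides whether incoming information is accepted at all.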


ArgMAS'07: Proceedings of the 4th International Conference on Argumentation in Multi-Agent Systems | 2007

A persuasion dialog for gaining access to information

Laurent Perrussel; Sylvie Doutre; Jean-Marc Thévenin; Peter McBurney

This paper presents a formal protocol for agents engaged in argumentation over access to information sources. Obtaining relevant information is essential for agents engaged in autonomous, goal-directed behavior, but access to such information is usually controlled by other autonomous agents with their own goals. Because these goals may conflict with one another, rational interactions between the two agents may take the form of a dialog, in which requests for information are successively issued, considered, justified and criticized. Even when the agents involved in such discussions agree on all the arguments for and against granting access to some information source, they may still disagree on their preferences between these arguments. To represent such situations, we design a protocol for dialogs between two autonomous agents for seeking and granting authorization to access some information source. This protocol is based on an argumentation dialog in which agents handle specific preferences and acceptability over arguments. We show how this argumentation framework provides a semantics for the protocol's exchange of arguments, and we illustrate the proposed framework with an example from medicine.
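The shape of such a protocol can be sketched as a small state machine over locutions. The move names and the legal-reply table below are illustrative assumptions, not the paper's actual move set:

```python
# Minimal sketch of a dialog protocol for seeking access to an information
# source: the protocol constrains which locution may follow which.

LEGAL_MOVES = {
    "start":     {"request"},             # seeker asks for access
    "request":   {"grant", "deny"},       # provider answers
    "deny":      {"challenge", "quit"},   # seeker may ask why
    "challenge": {"argue"},               # provider justifies the refusal
    "argue":     {"counter", "concede"},  # seeker attacks or gives up
    "counter":   {"argue", "grant"},      # provider replies or yields
    "grant":     set(), "concede": set(), "quit": set(),
}

def run_dialog(moves):
    """Check a sequence of locutions against the protocol; return the
    terminal state, or raise on an illegal move."""
    state = "start"
    for move in moves:
        if move not in LEGAL_MOVES[state]:
            raise ValueError(f"{move!r} is illegal after {state!r}")
        state = move
    return state

print(run_dialog(["request", "deny", "challenge", "argue", "counter", "grant"]))
```

In the paper's setting, whether the provider plays `grant` or keeps arguing would be decided by the acceptability of the exchanged arguments under each agent's preferences; the table above only fixes which replies are legal, not which are chosen.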


Australasian Joint Conference on Artificial Intelligence | 2009

Experimental Market Mechanism Design for Double Auction

Masabumi Furuhata; Laurent Perrussel; Jean-Marc Thévenin; Dongmo Zhang

In this paper, we introduce an experimental approach to the design, analysis and implementation of market mechanisms based on double auction. We define a formal market model that specifies the market policies in a double auction market. Based on this model, we introduce a set of criteria for the evaluation of market mechanisms. We design and implement a set of market policies and test them under different experimental settings. The results of the experiments provide us with a better understanding of the interrelationships among market policies and also show that an experimental approach can greatly improve the efficiency and effectiveness of market mechanism design.
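One of the policies such a market model composes is the clearing rule. As a hedged sketch, assuming equilibrium matching and a k-pricing rule (the example orders are invented; real mechanisms also combine accepting, tick and timing policies):

```python
# Clearing in a double auction: match the highest buyers with the lowest
# sellers while the buyer's bid meets the seller's ask; the trade price
# interpolates the two quotes by the parameter k (k-pricing).

def clear(buy_bids, sell_asks, k=0.5):
    buys = sorted(buy_bids, reverse=True)   # best buyers first
    sells = sorted(sell_asks)               # best sellers first
    trades = []
    for bid, ask in zip(buys, sells):
        if bid < ask:                       # no further profitable match
            break
        trades.append(round(k * bid + (1 - k) * ask, 2))
    return trades

# Three buyers, three sellers; only two pairs can trade.
print(clear([10, 8, 4], [3, 6, 9], k=0.5))  # [6.5, 7.0]
```

Varying `k` (seller-favoring at 0, buyer-favoring at 1) is exactly the kind of policy parameter whose effect the paper's experimental approach is designed to measure.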


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 2015

Consistency-Based Reliability Assessment

Laurence Cholvy; Laurent Perrussel; William Raynaut; Jean-Marc Thévenin

This paper addresses the question of assessing the relative reliability of unknown information sources. We propose to consider a phase during which the consistency of the information they report is analysed, whether it is the consistency of each single report, the consistency of a report with respect to some trusted knowledge, or the consistency of different reports taken together. We adopt an axiomatic approach: we first give postulates characterizing the properties the resulting reliability preorder should satisfy; we then define a family of operators for building this preorder and demonstrate that it satisfies the proposed postulates.
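The three kinds of consistency checks can be sketched with one simple scoring operator, assuming reports are sets of literals; this is a single toy instance, not the paper's full family of operators, and the source names are invented:

```python
# Rank sources by how often their reports conflict: internally, with
# trusted knowledge, or pairwise with other sources' reports.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def conflicts(a, b):
    return any(negate(lit) in b for lit in a)

def reliability_preorder(reports, trusted=frozenset()):
    """reports: dict source -> set of literals. Returns the sources
    sorted from most to least reliable (fewest conflicts first)."""
    score = {}
    for s, r in reports.items():
        score[s] = 0
        if conflicts(r, r):            # self-inconsistent report
            score[s] += 1
        if conflicts(r, trusted):      # clashes with trusted knowledge
            score[s] += 1
        for t, r2 in reports.items():  # clashes with other sources
            if t != s and conflicts(r, r2):
                score[s] += 1
    return sorted(reports, key=lambda s: score[s])

reports = {"s1": {"p", "q"}, "s2": {"~p"}, "s3": {"r", "~r"}}
print(reliability_preorder(reports, trusted={"p"}))  # s2 ranked last
```

Sources with equal scores end up equally ranked, which is why the output of the paper's operators is a preorder rather than a strict order.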


Adaptive Agents and Multi-Agent Systems | 2007

Arguing for gaining access to information

Sylvie Doutre; Peter McBurney; Laurent Perrussel; Jean-Marc Thévenin

This paper presents a protocol for agents engaged in argumentation over access to information sources. Obtaining relevant information is essential for agents engaged in autonomous, goal-directed behavior, but access to such information is usually controlled by other autonomous agents with their own goals. Because these goals may conflict with one another, rational interactions between the two agents may take the form of a dialog, in which requests for information are successively issued, considered, justified and criticized. Even when the agents involved in such discussions agree on all the arguments for and against granting access to some information source, they may still disagree on their preferences between these arguments. To represent such situations, we design a protocol for dialogs between two autonomous agents for seeking and granting authorization to access some information source. This protocol is based on an argumentation dialog in which agents handle specific preferences and acceptability over arguments.


International Journal of Approximate Reasoning | 2017

Using inconsistency measures for estimating reliability

Laurence Cholvy; Laurent Perrussel; Jean-Marc Thévenin

Any decision taken by an agent requires some knowledge of its environment. Communication with other agents is a key means of assessing the overall quality of its own knowledge. This assessment is a challenge in itself, as the agent may receive information from unknown agents. The aim of this paper is to propose a framework for assessing the reliability of unknown agents on the basis of their communications. We assume that information is represented by logical statements and that logical inconsistency is the underlying notion for reliability assessment. In our context, assessing consists of ranking the agents, representing reliability as a total preorder. The overall communication set is first evaluated with the help of inconsistency measures. Next, these measures are used to assess the contribution of each agent to the overall inconsistency of the communication set. After stating the postulates specifying the expected properties of the reliability preorder, we show through a representation theorem how these postulates and the agents' contributions are interwoven. We also detail how the properties of the inconsistency measures influence the properties of the contribution assessment. Finally, we describe how to aggregate different reliability preorders, each of which may be based on a different inconsistency measure.
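The measure-then-contribute pipeline can be sketched under strong simplifications: a crude inconsistency measure (the number of clashing literal pairs) is applied to the whole communication set, and an agent's contribution is taken as the drop in the measure when that agent's reports are withdrawn. Both choices, and the agent names, are illustrative assumptions:

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def inconsistency(communication):
    """A toy inconsistency measure: count clashing literal pairs across
    all reports in the communication set."""
    literals = [lit for report in communication.values() for lit in report]
    return sum(1 for a, b in combinations(literals, 2) if negate(a) == b)

def reliability_ranking(communication):
    """Rank agents from most to least reliable: smallest contribution to
    the overall inconsistency first."""
    total = inconsistency(communication)
    contribution = {
        agent: total - inconsistency(
            {a: r for a, r in communication.items() if a != agent})
        for agent in communication
    }
    return sorted(communication, key=lambda a: contribution[a])

comm = {"a1": ["p", "q"], "a2": ["~p"], "a3": ["~p", "~q"]}
print(reliability_ranking(comm))  # ['a2', 'a3', 'a1']
```

Swapping in a different inconsistency measure changes the contributions, and hence the preorder, which is the dependence between measure properties and assessment properties that the paper analyses.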


Adaptive Agents and Multi-Agent Systems | 2006

Mutual enrichment through nested belief change

Laurent Perrussel; Jean-Marc Thévenin; Thomas Andreas Meyer

We investigate the dynamics of nested beliefs in the context of agent interactions. Nested beliefs represent what agents believe about the beliefs of other agents. We consider the tell KQML performative, which allows agents to send their own beliefs to others. Whenever agents accept a new belief, or refuse to change their own beliefs after receiving a message, both receiver and sender enrich their nested beliefs by refining their beliefs about the other agent's beliefs, as well as about the other agent's preferences. The main objective of nested beliefs is to improve cooperation between agents. We propose a logical framework for the acquisition process of nested beliefs and preferences. This acquisition process is the first step toward the elaboration of sophisticated interaction protocols. In this short paper we provide an informal outline of the framework, guided by an intuitive running example.


European Conference on Logics in Artificial Intelligence | 2012

Relevant minimal change in belief update

Laurent Perrussel; Jerusa Marchi; Jean-Marc Thévenin; Dongmo Zhang

The notion of relevance was introduced by Parikh in the field of belief revision for handling minimal change. It prevents the loss of beliefs that have no connection with the epistemic input. However, the problem of minimal change and relevance is still an open issue in belief update. In this paper, a new framework for handling minimal change and relevance in the context of belief update is introduced. This framework goes beyond relevance in Parikh's sense and enforces minimal change, first by rewriting the Katsuno-Mendelzon postulates for belief update and second by introducing a new relevance postulate. We show that relevant minimal change can be characterized by setting agents' preferences on beliefs, where preferences are indexed by subsets of models of the belief set. Each subset represents a prime implicant of the belief set and thus stresses the key propositional symbols for representing the belief set.
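The prime implicants that index those preferences can be computed by brute force for small belief sets. This is an illustrative sketch only, with an invented example formula, and says nothing about the paper's update operator itself:

```python
from itertools import combinations, product

def prime_implicants(formula, symbols):
    """Enumerate the minimal partial assignments (terms) that force the
    formula true. formula: callable taking a full dict symbol -> bool."""
    def entails(term):
        # The term must make the formula true for every completion.
        free = [s for s in symbols if s not in term]
        return all(formula({**term, **dict(zip(free, vals))})
                   for vals in product([False, True], repeat=len(free)))

    implicants = []
    for size in range(1, len(symbols) + 1):   # smallest terms first
        for subset in combinations(symbols, size):
            for vals in product([False, True], repeat=size):
                term = dict(zip(subset, vals))
                # Keep only terms not subsumed by a smaller implicant.
                if entails(term) and not any(
                        t.items() <= term.items() for t in implicants):
                    implicants.append(term)
    return implicants

# Belief set (p or q) and r: prime implicants {p, r} and {q, r}.
belief = lambda v: (v["p"] or v["q"]) and v["r"]
print(prime_implicants(belief, ["p", "q", "r"]))
```

Each prime implicant mentions only the symbols that matter for that way of satisfying the belief set (here `p, r` and `q, r`), which is how the framework identifies the key propositional symbols that an update should treat as relevant.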


International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2005

(Dis)Belief Change and Argued Feed-Back Dialog

Laurent Perrussel; Jean-Marc Thévenin

This paper focuses on the features of belief change in a multi-agent context where agents consider beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents t...

Collaboration


Top co-authors of Jean-Marc Thévenin:

Dongmo Zhang (University of Western Sydney)
Laurence Cholvy (National Polytechnic Institute of Toulouse)
Masabumi Furuhata (University of Western Sydney)