Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gauvain Bourgne is active.

Publication


Featured research published by Gauvain Bourgne.


Adaptive Agents and Multi-Agent Systems | 2007

SMILE: Sound Multi-agent Incremental LEarning

Gauvain Bourgne; Amal El Fallah Seghrouchni; Henry Soldano

This article deals with the problem of collaborative learning in a multi-agent system. Each agent can incrementally update its beliefs B (the concept representation) so that they are kept consistent with the whole set of information K (the examples) it has received from the environment or from other agents. We extend this notion of consistency (or soundness) to the whole MAS and discuss how to ensure that, at any moment, the same consistent concept representation is present in each agent. The corresponding protocol is applied to supervised concept learning. The resulting method, SMILE (Sound Multi-agent Incremental LEarning), is described and evaluated experimentally. Surprisingly, given the same learning set, some difficult Boolean formulas are learned better by a multi-agent system than by a single agent.
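A learner/critic loop of this kind can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the hypothesis space is restricted to conjunctions of attributes, `revise` simply intersects the positive examples, and all names (`Agent`, `criticise`, `smile_round`) are hypothetical. The learner revises its hypothesis whenever a critic produces a counter-example, until every critic accepts it.

```python
# Toy sketch of a learner/critic consistency loop (illustrative, not SMILE itself).
# An example is (attrs, label) with attrs a frozenset; a hypothesis is a set of
# required attributes, predicting positive iff hyp <= attrs.

def consistent(hyp, examples):
    """True iff hyp predicts the correct label for every example."""
    return all((hyp <= attrs) == label for attrs, label in examples)

def revise(examples):
    """Most specific conjunction covering all positive examples."""
    pos = [attrs for attrs, label in examples if label]
    if not pos:
        return frozenset({"__absurd__"})  # rejects everything
    return frozenset.intersection(*map(frozenset, pos))

class Agent:
    def __init__(self):
        self.memory = []  # examples this agent has stored
    def criticise(self, hyp):
        """Return a stored counter-example to hyp, or None if hyp is acceptable."""
        for attrs, label in self.memory:
            if (hyp <= attrs) != label:
                return (attrs, label)
        return None

def smile_round(learner, critics, new_example):
    """Learner receives an example, then revises until every critic accepts."""
    learner.memory.append(new_example)
    hyp = revise(learner.memory)
    changed = True
    while changed:
        changed = False
        for critic in critics:
            ce = critic.criticise(hyp)
            # the membership guard avoids looping when the toy hypothesis
            # space cannot fit the data
            if ce is not None and ce not in learner.memory:
                learner.memory.append(ce)      # counter-example is transmitted
                hyp = revise(learner.memory)   # and triggers a new revision
                changed = True
    return hyp
```

With realizable data (a target concept that is a conjunction), the loop converges to a hypothesis consistent with every example held by the group.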


European Conference on Artificial Intelligence | 2010

Abduction of distributed theories through local interactions

Gauvain Bourgne; Katsumi Inoue; Nicolas Maudet

What happens when distributed sources of information (agents) hold and acquire information locally, and have to communicate with neighbouring agents in order to refine their hypotheses regarding the actual global state of the environment? This question arises when it is not possible (e.g. for practical or privacy reasons) to collect all observations and knowledge and centrally compute the resulting theory. In this paper, we assume that agents are equipped with full clausal theories and individually face abductive tasks in a globally consistent environment. We adopt a learner/critic approach. Previous work in this line mostly relied on assumptions of compositionality (which allow each piece of exchanged information to be treated separately). Because no shared background knowledge is assumed to start with, compositionality does not hold here. We design a protocol guaranteeing convergence to a situation that is “sufficiently” satisfactory as far as the consistency of the system is concerned, and discuss its other properties.


International Conference on Tools with Artificial Intelligence | 2009

Collaborative Concept Learning: Non Individualistic vs Individualistic Agents

Gauvain Bourgne; Dominique Bouthinon; Amal El Fallah Seghrouchni; Henry Soldano

This article addresses collaborative learning in a multi-agent system: each agent incrementally revises its beliefs B (a concept representation) to keep them consistent with the whole set of information K (the examples) it has received from the environment or from other agents. In SMILE, this notion of consistency was extended to a group of agents, and a unique consistent concept representation was thus maintained inside the group. In the present paper, we present iSMILE, in which the agents still provide examples to other agents but keep their own concept representations. We will see that iSMILE is more time-consuming and loses part of its learning ability, but that when agents cooperate at classification time, the group benefits from the advantages of ensemble learning.


Adaptive Agents and Multi-Agent Systems | 2007

Hypotheses refinement under topological communication constraints

Gauvain Bourgne; Gael Hette; Nicolas Maudet; Suzanne Pinson

We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perceiving an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. We first show that, for a wide class of protocols that we shall define, the communication constraints induced by the topology will not prevent the convergence of the system, on the condition that the system dynamics guarantees that no agent is ever isolated forever and that agents have unlimited time for computation and argument exchange. As this assumption cannot be made in most situations, we then set up an experimental framework aimed at comparing the relative efficiency and effectiveness of different interaction protocols for hypothesis exchange. We study a critical situation involving a number of agents trying to escape from a burning building. The results reported here provide some insights regarding the design of optimal protocols for hypothesis refinement in this context.
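The convergence condition above (no agent isolated forever under a dynamic topology) can be illustrated with a deliberately simplified Python model. Everything here is a stand-in, not the paper's protocol: observations are integers, an agent's "favoured hypothesis" is merely the strongest observation it holds, and the dynamic topology is an explicit sequence of edge sets.

```python
# Toy model of hypothesis propagation under a dynamic communication topology.
# (Illustrative stand-ins for hypotheses and arguments, not the paper's protocol.)

class Agent:
    def __init__(self, observations):
        self.observations = set(observations)

    @property
    def hypothesis(self):
        # Placeholder selection criterion: favour the strongest observation.
        return max(self.observations)

def exchange(a, b):
    """Bilateral exchange: the agent with the weaker hypothesis receives the
    other's supporting observations and revises. Returns True if anything changed."""
    if a.hypothesis == b.hypothesis:
        return False
    weaker, stronger = sorted((a, b), key=lambda ag: ag.hypothesis)
    weaker.observations |= stronger.observations
    return True

def run(agents, topology_rounds):
    """topology_rounds: one edge set per time step (the dynamic topology).
    Global consistency needs every agent to be eventually connected to the rest."""
    for edges in topology_rounds:
        for i, j in edges:
            exchange(agents[i], agents[j])
    return [ag.hypothesis for ag in agents]
```

Even though no single round connects all agents, the hypothesis held by the best-informed agent eventually propagates, provided the sequence of edge sets leaves no agent permanently cut off.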


International Conference on Logic Programming | 2015

Modelling Moral Reasoning and Ethical Responsibility with Logic Programming

Fiona Berreby; Gauvain Bourgne; Jean-Gabriel Ganascia

In this paper, we investigate the use of high-level action languages for representing and reasoning about ethical responsibility in goal specification domains. First, we present a simplified Event Calculus formulated as a logic program under the stable model semantics in order to represent situations within Answer Set Programming. Second, we introduce a model of causality that allows us to use an answer set solver to reason about the agents' ethical responsibility. We then extend and test this framework against the Trolley Problem and the Doctrine of Double Effect. The overarching aim of the paper is to propose a general and adaptable formal language that may be employed over a variety of ethical scenarios in which the agents' responsibility must be examined and their choices determined. Our fundamental ambition is to shift the burden of moral reasoning from the programmer to the program itself, moving away from current computational ethics that too easily embed moral reasoning within computational engines, thereby feeding them atomic answers that fail to truly represent the underlying dynamics.


Web Intelligence | 2009

Learning in a Fixed or Evolving Network of Agents

Gauvain Bourgne; Amal El Fallah-Seghrouchni; Henry Soldano

This paper investigates incremental multiagent learning in static or evolving structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, a first mechanism was proposed to deal with static networks, but its accuracy was reduced in some topologies. We propose here several possible improvements of this mechanism, whose different behaviors with respect to some efficiency requirements (redundancy, computational cost and communication cost) are experimentally investigated. Then, we provide an experimental analysis of some variants for evolving networks.


European Conference on Artificial Intelligence | 2014

Multi agent learning of relational action models

Christophe Rodrigues; Henry Soldano; Gauvain Bourgne; Céline Rouveirol

Multi-agent relational action learning considers a community of agents, each acting rationally according to some relational action model. The observed effects of past actions that led an agent to revise its action model can be communicated, upon request, to another agent, speeding up that agent's own revision. We present a framework for such collaborative relational action model revision.


International Conference on Tools with Artificial Intelligence | 2016

Collaborative Decision in Multi-Agent Learning of Action Models

Christophe Rodrigues; Henry Soldano; Gauvain Bourgne; Céline Rouveirol

We address collaborative decision-making in multi-agent consistency-based online learning of relational action models. This framework considers a community of agents, each of them learning and acting rationally according to its relational action model. It relies on the idea that when agents communicate, on a utility basis, the observed effects of past actions to other agents, the online learning process of each agent in the community is sped up. In the present article, we discuss how collaboration in this framework can be extended to the individual decision level. More precisely, we first discuss how an agent's ability to predict the effect of some action in its current state is enhanced when it takes into account all the action models in the community. Secondly, we consider the situation in which an agent fails to produce a plan using its own action model, and show how it can interact with the other agents in the community in order to select an appropriate action to perform. Such a community-aided action selection strategy helps the agent revise its action model and increases its ability to reach its current goal as well as future ones.


Proceedings of the International Conference on Web Intelligence | 2017

Waves: a model of collective learning

Lise-Marie Veillon; Gauvain Bourgne; Henry Soldano

Collective learning considers how agents in a community sharing a learning purpose may benefit from exchanging hypotheses and observations so as to learn efficiently, as a community as well as individually. The community forms a communication network and each agent has access to observations. We address the question of a protocol, i.e. a set of agent behaviours, which guarantees that the hypotheses retained by the agents take into account all the observations in the community. We present and investigate the protocol WAVES, which provides such a guarantee in a turn-based scenario: at the beginning of each turn, agents collect new observations and then interact until they all reach this consistency guarantee. We investigate and experiment with WAVES on various network topologies and various experimental parameters. We present results on learning efficiency, in terms of computation and communication costs, as well as results on learning quality, in terms of predictive accuracy for a given number of observations collected by the community.
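The turn-based structure described in this abstract, collect then interact to a fixpoint, can be sketched as follows. This is a hedged toy model, not WAVES itself: an agent's "hypothesis" collapses to the set of observations it knows about, and `collect`, `waves`, and the message counter are illustrative names.

```python
# Toy turn-based collective learning loop (illustrative, not the WAVES protocol).
# Each turn: an observation-collection phase, then an interaction phase that runs
# until every agent's knowledge covers all observations in the community.

def waves(network, turns, collect):
    """network: dict agent_id -> set of neighbour ids.
    collect(agent_id, turn) -> set of new observations for that agent.
    Returns per-agent knowledge and the number of messages sent."""
    knowledge = {a: set() for a in network}
    messages = 0
    for t in range(turns):
        for a in network:                       # observation-collection phase
            knowledge[a] |= collect(a, t)
        changed = True
        while changed:                          # interaction phase: run to fixpoint
            changed = False
            for a in network:
                for b in network[a]:
                    missing = knowledge[a] - knowledge[b]
                    if missing:
                        knowledge[b] |= missing # forward what the neighbour lacks
                        messages += 1
                        changed = True
    return knowledge, messages
```

On a connected topology, the interaction phase of each turn terminates with all agents sharing the same knowledge, and the message count gives a crude proxy for the communication cost the abstract measures.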


Pacific Rim International Conference on Multi-Agents | 2008

Multiagent Incremental Learning in Networks

Gauvain Bourgne; Amal El Fallah Seghrouchni; Nicolas Maudet; Henry Soldano

This paper investigates incremental multiagent learning in structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, different mechanisms have been proposed that allow groups of agents to coordinate their hypotheses. Although these mechanisms have been shown to guarantee (theoretically) convergence to globally consistent states of the system, other notions of effectiveness can be considered to assess their quality. Furthermore, this guaranteed property should not come at the price of a great loss of efficiency (for instance, a prohibitive communication cost). We explore these questions theoretically and experimentally (using various Boolean formula learning problems).

Collaboration


Dive into Gauvain Bourgne's collaborations.

Top Co-Authors

Katsumi Inoue, National Institute of Informatics
Suzanne Pinson, Paris Dauphine University
Olivier Boissier, École Normale Supérieure
Philippe Jaillon, École Normale Supérieure