
Publications


Featured research published by Aurélie Beynier.


European Conference on Artificial Intelligence | 2010

A Decision-Theoretic Approach to Cooperative Control and Adjustable Autonomy

Abdel-Illah Mouaddib; Shlomo Zilberstein; Aurélie Beynier; Laurent Jeanpierre

Cooperative control can help overcome the limitations of autonomous systems (AS) by introducing a supervision unit (SU) (human or another system) into the control loop and creating adjustable autonomy. We present a decision-theoretic approach to accomplish this using Mixed Markov Decision Processes (MI-MDPs). The solution is an optimal plan that tells the AS what actions to perform as well as when to request SU attention or transfer control to the SU. This provides a varying degree of autonomy, particularly suitable for robots exploring a domain with regions that are too complex or risky for autonomous operation, or intelligent vehicles operating in heavy traffic.
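
As a rough illustration of the adjustable-autonomy idea, the sketch below (a toy stand-in, not the paper's MI-MDP formulation; all states, rewards, and costs are invented) runs value iteration on a small MDP whose action set mixes autonomous actions with requesting supervision and transferring control, so the resulting policy says where the system should act alone and where it should hand over:

```python
# Toy sketch: an MDP mixing autonomous actions with supervision requests.
# States, transition probabilities, rewards and costs are invented.

GAMMA = 0.95

# states: 0 = easy region, 1 = risky region, 2 = goal (terminal)
# each action maps state -> list of (probability, next_state, reward)
ACTIONS = {
    "act_autonomously": {
        0: [(0.9, 2, 10.0), (0.1, 1, -1.0)],
        1: [(0.3, 2, 10.0), (0.7, 1, -5.0)],   # risky region: often fails
    },
    "request_supervision": {                   # costly but reliable
        0: [(1.0, 2, 10.0 - 3.0)],
        1: [(1.0, 2, 10.0 - 3.0)],
    },
    "transfer_control": {                      # fully manual, higher cost
        0: [(1.0, 2, 10.0 - 6.0)],
        1: [(1.0, 2, 10.0 - 6.0)],
    },
}

def value_iteration(n_states=3, terminal={2}, eps=1e-6):
    V = [0.0] * n_states
    while True:
        delta, new_v = 0.0, V[:]
        for s in range(n_states):
            if s in terminal:
                continue
            new_v[s] = max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes[s])
                for outcomes in ACTIONS.values()
            )
            delta = max(delta, abs(new_v[s] - V[s]))
        V = new_v
        if delta < eps:
            return V

def policy(V):
    return {
        s: max(ACTIONS, key=lambda a: sum(
            p * (r + GAMMA * V[s2]) for p, s2, r in ACTIONS[a][s]))
        for s in (0, 1)
    }

V = value_iteration()
print(policy(V))  # autonomous in the easy region, supervised in the risky one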


Autonomous Robots | 2016

Punctual versus continuous auction coordination for multi-robot and multi-task topological navigation

Guillaume Lozenguez; Lounis Adouane; Aurélie Beynier; Abdel-Illah Mouaddib; Philippe Martinet

This paper investigates the benefits of punctual versus continuous coordination for mobile multi-robot systems in which robots use auctions to allocate tasks among themselves and compute their policies in a distributed way. In continuous coordination, one task at a time is assigned to and performed by each robot. In punctual coordination, all tasks are distributed during rendezvous phases over the course of the mission. However, the task-allocation problem grows exponentially with the number of tasks. The proposed approach rests on two elements: (1) a control architecture based on a topological representation of the environment, which reduces planning complexity, and (2) a protocol based on sequential simultaneous auctions (SSA) to coordinate the robots' policies. The policies are computed individually using Markov Decision Processes oriented by several goal-task positions to reach. Experiments on both real robots and in simulation evaluate the proposed robot architecture coupled with the SSA protocol. The efficiency of mission execution is empirically compared against continuous planning.
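
To make the auction mechanism concrete, here is a minimal sketch of a sequential simultaneous auction round under invented assumptions (robots bid their marginal travel cost; one task per round goes to the cheapest bidder); it illustrates the allocation protocol only, not the MDP policy computation of the paper:

```python
# Toy SSA-style allocation: repeated rounds of simultaneous bids,
# one task awarded per round to the lowest-cost (robot, task) pair.

import math

def travel_cost(pos, task):
    return math.dist(pos, task)

def ssa_allocate(robot_positions, tasks):
    """Auction tasks one round at a time; every robot bids in each round."""
    assignment = {r: [] for r in robot_positions}
    current = dict(robot_positions)          # robots "move" to won tasks
    remaining = list(tasks)
    while remaining:
        # each robot submits, simultaneously, its cheapest-task bid
        bids = {
            r: min(remaining, key=lambda t: travel_cost(current[r], t))
            for r in current
        }
        # the cheapest (robot, task) pair wins this round
        winner = min(bids, key=lambda r: travel_cost(current[r], bids[r]))
        task = bids[winner]
        assignment[winner].append(task)
        current[winner] = task               # next bids start from this task
        remaining.remove(task)
    return assignment

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = [(1.0, 1.0), (9.0, 1.0), (5.0, 5.0)]
print(ssa_allocate(robots, tasks))
```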


Scalable Uncertainty Management | 2014

Solving Hidden-Semi-Markov-Mode Markov Decision Problems

Emmanuel Hadoux; Aurélie Beynier; Paul Weng

Hidden-Mode Markov Decision Processes (HM-MDPs) were proposed to represent sequential decision-making problems in non-stationary environments that evolve according to a Markov chain. In this paper we introduce Hidden-Semi-Markov-Mode Markov Decision Processes (HS3MDPs), a generalization of HM-MDPs to the more realistic case of non-stationary environments evolving according to a semi-Markov chain. Like HM-MDPs, HS3MDPs form a subclass of Partially Observable Markov Decision Processes. Large instances of HS3MDPs and HM-MDPs can therefore be solved with an online algorithm, Partially Observable Monte Carlo Planning (POMCP), based on Monte Carlo Tree Search and particle filters for belief-state approximation. We propose a first adaptation of POMCP that solves HS3MDPs more efficiently by exploiting their structure. Our empirical results show that this adapted POMCP reaches higher cumulative rewards than the original algorithm. On larger instances, however, POMCP may run out of particles. To address this issue, we propose a second adaptation that replaces particle filters with exact representations of beliefs. Our empirical results indicate that this version reaches high cumulative rewards faster than the first adaptation and remains efficient even for large problems.
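
The particle-filter belief tracking that POMCP relies on can be sketched for a semi-Markov mode chain as follows; the two modes, sojourn-time law, and observation noise below are invented for illustration, and this is not the authors' adapted solver:

```python
# Toy particle filter over (mode, remaining sojourn duration)
# for a semi-Markov mode chain with noisy mode observations.

import random

MODES = [0, 1]
MODE_TRANSITION = [[0.0, 1.0], [1.0, 0.0]]     # switch when the sojourn expires

def sample_duration(mode):
    return random.randint(1, 4)                 # toy semi-Markov sojourn time

def obs_likelihood(obs, mode):
    return 0.8 if obs == mode else 0.2          # noisy observation of the mode

def step_particle(mode, duration):
    if duration > 1:
        return mode, duration - 1               # stay in the current mode
    new_mode = random.choices(MODES, MODE_TRANSITION[mode])[0]
    return new_mode, sample_duration(new_mode)

def update_belief(particles, obs, n=1000):
    """Propagate, then reweight/resample particles given one observation."""
    moved = [step_particle(m, d) for m, d in particles]
    weights = [obs_likelihood(obs, m) for m, _ in moved]
    return random.choices(moved, weights, k=n)

belief = [(random.choice(MODES), sample_duration(0)) for _ in range(1000)]
for obs in [0, 0, 1, 1]:
    belief = update_belief(belief, obs)
print(sum(1 for m, _ in belief if m == 1) / len(belief))  # estimated P(mode = 1)
```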


Autonomous Agents and Multi-Agent Systems | 2011

Solving efficiently Decentralized MDPs with temporal and resource constraints

Aurélie Beynier; Abdel-Illah Mouaddib

Optimizing the operation of cooperative multi-agent systems that can deal with large and realistic problems has become an important focal area of research in the multi-agent community. In this paper, we first present a new model, the OC-DEC-MDP (Opportunity Cost Decentralized Markov Decision Process), that allows us to represent large multi-agent decision problems with temporal and precedence constraints. Then, we propose polynomial algorithms to efficiently solve problems formalized by OC-DEC-MDPs. The problems we deal with consist of a set of agents that have to execute a set of tasks in a cooperative way. The agents cannot communicate during task execution and they must respect resource and temporal constraints. Our approach is based on Decentralized Markov Decision Processes (DEC-MDPs) and uses the concept of opportunity cost borrowed from economics to obtain approximate control policies. Experimental results show that our approach produces good quality solutions for complex problems which are out of reach of existing approaches.
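
A minimal sketch of the opportunity-cost intuition, with invented numbers: an agent scores each local option by its own reward minus the value its delayed finishing time would cost a successor agent:

```python
# Toy opportunity cost: the value a successor forgoes when a shared
# precedence task finishes later. All numbers are invented.

# expected value the successor can still obtain if it may start at time t
SUCCESSOR_VALUE = {0: 10.0, 1: 10.0, 2: 8.0, 3: 5.0, 4: 0.0}

def opportunity_cost(end_time, baseline_end=0):
    """Value lost by the successor relative to the earliest possible finish."""
    return SUCCESSOR_VALUE[baseline_end] - SUCCESSOR_VALUE[end_time]

def local_choice(options):
    """Pick the option maximizing own reward minus induced opportunity cost."""
    return max(options, key=lambda o: o["reward"] - opportunity_cost(o["end"]))

options = [
    {"name": "fast_sloppy", "reward": 4.0, "end": 1},
    {"name": "slow_careful", "reward": 6.0, "end": 3},
]
print(local_choice(options)["name"])  # fast_sloppy: 4 - 0 beats 6 - 5
```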


7th International Symposium on Distributed Autonomous Robotic Systems | 2007

Decentralized Markov Decision Processes for Handling Temporal and Resource Constraints in a Multiple Robot System

Aurélie Beynier; Abdel-Illah Mouaddib

In this paper we consider a multi-robot planning system where robots carry out a common mission with the following characteristics: the mission is an acyclic graph of tasks with dependencies and temporal validity windows, and tasks are distributed among robots whose task durations and resource consumptions are uncertain. This class of problems can be solved with decision-theoretic planning techniques that handle local temporal constraints and dependencies between robots, allowing them to synchronize their processing. A specific decision model and a value function allow the robots to coordinate their actions at runtime so as to maximize the overall value of the mission. To this end, we design a cooperative multi-robot planning system using distributed Markov Decision Processes (MDPs) that requires no communication: robots take uncertainty over temporal intervals and dependencies into account and use a distributed value function to coordinate their actions.
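
To illustrate the interplay of uncertain durations and temporal constraints, here is a toy calculation (all numbers invented): the expected value of starting a task at a given time, when reward is earned only if the task finishes inside its validity window:

```python
# Toy model: pick a start time for a task with stochastic duration
# so that it is likely to end inside its temporal validity window.

DURATION_DIST = {2: 0.5, 3: 0.3, 4: 0.2}   # P(duration)
WINDOW = (0, 5)                            # task must finish by time 5
REWARD = 10.0

def expected_value(start):
    """Reward is earned only if the task ends within its validity window."""
    return sum(
        p * (REWARD if start + d <= WINDOW[1] else 0.0)
        for d, p in DURATION_DIST.items()
    )

best_start = max(range(WINDOW[0], WINDOW[1]), key=expected_value)
print(best_start, expected_value(best_start))
# starting at 0 or 1 always succeeds; starting at 2 risks missing the deadline
```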


Archive | 2011

Applications of DEC-MDPs in Multi-Robot Systems

Aurélie Beynier; Abdel-Illah Mouaddib

Optimizing the operation of cooperative multi-robot systems that act in large and complex environments has become an important focal area of research. This issue is motivated by the many applications in which a set of cooperative robots must decide, in a decentralized way, how to execute a large set of tasks in partially observable and uncertain environments. Such decision problems arise in exploration rovers, teams of patrolling robots, rescue-robot colonies, mine-clearance robots, and so on. In this chapter, we introduce the problems related to the decentralized control of multi-robot systems. We first describe some application domains and review the main characteristics of the decision problems the robots must deal with. We then review existing approaches for decentralized multiagent control in stochastic environments, present Decentralized Markov Decision Processes (DEC-MDPs), and discuss their applicability to real-world multi-robot applications. Finally, we introduce OC-DEC-MDPs and 2V-DEC-MDPs, which have been developed to increase the applicability of DEC-MDPs.


International Conference on Game Theory for Networks | 2009

Decentralized decision making process for document server networks

Aurélie Beynier; Abdel-Illah Mouaddib

A peer-to-peer server network consists of a large number of autonomous servers logically connected in a peer-to-peer way, where each server maintains a collection of documents. When a request to store new documents is received by the system, a distributed search process determines the most relevant servers and redirects the documents to them for processing (compressing and storing in the right document base). In this paper, we model this distributed search as a distributed sequential decision-making problem, using a set of interactive Markov Decision Processes (MDPs), a specific stochastic-game approach, to represent each server's decision-making problem. The relevance of a server to a document is treated as a reward that accounts for the server's storage capacity and goodness score. We show that using a central MDP to derive an optimal policy for distributing documents among servers leads to high complexity and is ill-suited to the distributed nature of the application. We then present the interactive-MDP approach, which transforms this problem into a decentralized decision-making process.
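
A minimal sketch of the relevance reward described above, with invented weights and scores: each server locally combines its remaining storage capacity with its topical goodness score, and the best-scoring server would be chosen:

```python
# Toy relevance reward: weighted mix of free-capacity ratio and goodness score.
# The weighting scheme and all values are invented for illustration.

def relevance(server, doc_size, alpha=0.5):
    """Reward for storing the document on this server."""
    if server["free"] < doc_size:
        return 0.0                               # cannot store at all
    capacity_term = server["free"] / server["total"]
    return alpha * capacity_term + (1 - alpha) * server["goodness"]

servers = [
    {"name": "s1", "free": 80.0, "total": 100.0, "goodness": 0.2},
    {"name": "s2", "free": 20.0, "total": 100.0, "goodness": 0.9},
]
doc = 10.0
best = max(servers, key=lambda s: relevance(s, doc))
print(best["name"])  # s2: topical fit outweighs its lower free capacity
```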


Virtual Reality Software and Technology | 2014

Towards real-time credible and scalable agent-based simulations of autonomous pedestrians navigation

Patrick Simo Kanmeugne; Aurélie Beynier

In this paper, we focus on the real-time simulation of autonomous pedestrian navigation. We introduce a Macroscopic-Influenced Microscopic (MIM) approach that aims to reduce the gap between microscopic and macroscopic approaches by providing credible walking paths for a potentially highly congested crowd of autonomous pedestrians. Our approach originates from a least-effort formulation of the navigation task, which allows us to account consistently for congestion at every level of decision. We use the multi-agent paradigm and describe pedestrians as autonomous, situated agents who dynamically plan energy-efficient paths and interact with each other through the environment. The navigable space is considered a set of contiguous resources that agents use to build their paths. We emulate the dynamic path computation for each agent with an evolutionary search algorithm specifically designed to run in real time, individually and autonomously. We compared an implementation of our approach with the ORCA model on low-density and high-density scenarios and obtained promising results in terms of credibility and scalability. We believe that the ORCA model and other microscopic models could easily be extended to embrace our approach, providing richer simulations of potentially highly congested crowds of autonomous pedestrians.
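
To illustrate the least-effort, congestion-aware cost model (the paper uses an evolutionary search; a plain Dijkstra stands in here, and the graph, occupancies, and penalty factor are invented):

```python
# Toy congestion-aware routing: navigable space as a graph of resources,
# where traversal effort grows with a resource's current occupancy.

import heapq

# resource graph: node -> [(neighbor, base_effort)]
GRAPH = {
    "A": [("B", 1.0), ("C", 1.5)],
    "B": [("D", 1.0)],
    "C": [("D", 1.0)],
    "D": [],
}
OCCUPANCY = {"A": 0, "B": 8, "C": 1, "D": 0}   # pedestrians per resource

def effort(base, occupants, crowd_penalty=0.3):
    return base * (1.0 + crowd_penalty * occupants)

def least_effort_path(start, goal):
    """Dijkstra over congestion-weighted efforts."""
    frontier = [(0.0, start, [start])]
    done = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, base in GRAPH[node]:
            heapq.heappush(
                frontier, (cost + effort(base, OCCUPANCY[nxt]), nxt, path + [nxt])
            )
    return None

print(least_effort_path("A", "D"))  # detours through C to avoid the crowd in B
```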


International Conference on Intelligent Autonomous Systems | 2013

Interleaving Planning and Control of Mobile Robots in Urban Environments Using a Road-Map

Guillaume Lozenguez; Lounis Adouane; Aurélie Beynier; Abdel-Illah Mouaddib; Philippe Martinet

This paper presents a robotic solution that allows a robot to automatically reach a set of assigned goals. The challenge is to design autonomous robots that perform missions without a predefined plan. We address the stochastic salesman problem, where the goal is to visit a set of points of interest. A stochastic road-map is defined as a topological representation of an unstructured environment with uncertainty over path achievement. The road-map allows us to separate deliberation from reactive control. The proposed decision-making uses Markov Decision Processes (MDPs) to plan the reactive tasks to perform while some goals remain unreached. Finally, after a brief explanation of how the approach could be extended to multi-robot missions, experiments in real conditions evaluate the proposed architecture on multi-robot stochastic-salesman missions.
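
A minimal sketch of planning on a stochastic road-map, with an invented graph: an edge traversal succeeds only with some probability (a failure costs the travel effort and leaves the robot in place), and a small MDP over (position, remaining goals) is solved by memoized recursion:

```python
# Toy stochastic road-map: expected-cost planning over (position, goals left).

from functools import lru_cache

EDGES = {  # (node, node) -> (success probability, travel cost)
    ("base", "g1"): (0.9, 2.0), ("base", "g2"): (0.6, 1.0),
    ("g1", "g2"): (0.8, 1.5), ("g2", "g1"): (0.8, 1.5),
}
GOALS = frozenset({"g1", "g2"})

@lru_cache(maxsize=None)
def expected_cost(pos, remaining):
    """Minimal expected travel cost to visit all remaining goals."""
    if not remaining:
        return 0.0
    best = float("inf")
    for nxt in remaining:
        p, c = EDGES[(pos, nxt)]
        # a failed traversal costs c and leaves the robot where it was,
        # so x = c + (1 - p) * x + p * onward, i.e. x = c / p + onward
        onward = expected_cost(nxt, remaining - {nxt})
        best = min(best, c / p + onward)
    return best

print(expected_cost("base", GOALS))  # cheapest expected visiting order
```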


Web Intelligence | 2010

A Rich Communication Model in Opportunistic Decentralized Decision Making

Aurélie Beynier; Abdel-Illah Mouaddib

Communication is a natural way to improve coordination in multi-agent systems under decentralized control: it allows the agents to exchange local information and increase their observability of the system, leading to higher performance. Recent work on decentralized control in cooperative multiagent systems has shown great interest in Decentralized Markov Decision Processes (DEC-MDPs). However, the communication models proposed for DEC-MDPs make strong assumptions that seldom hold in realistic multiagent systems, where agent execution may be asynchronous and communication is time- and resource-consuming and may be restricted by temporal constraints. In this paper we propose an approach that formalizes more complex and realistic communication decisions in DEC-MDPs with an interaction graph. We assume a communication model in which, at each decision step, each agent must be able to decide whether to communicate, which information to communicate, and to whom. To make such decisions, we extend one of the most scalable decentralized decision models, the DEC-MDP with opportunity cost (OC-DEC-MDP). The new model allows us to assess the value of decisions about when to communicate, what to communicate, and to whom, while preserving the performance of OC-DEC-MDPs.
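
A minimal sketch of the communication decision itself, with invented numbers: at each step an agent sends a message to a teammate only when the expected coordination gain (e.g., estimated from the extended model's value function) exceeds the communication cost:

```python
# Toy communication decision: send (message, recipient) pairs whose
# expected coordination gain exceeds the communication cost.

COMM_COST = 1.0

# expected team-value improvement if this message reaches this teammate;
# in the paper's setting this would come from the decision model's values
GAIN = {
    ("task_done", "agent2"): 3.0,
    ("task_done", "agent3"): 0.4,
    ("delayed", "agent2"): 0.8,
}

def communication_decision():
    """Return the (message, recipient) pairs worth sending, if any."""
    return [
        (msg, to) for (msg, to), gain in GAIN.items()
        if gain - COMM_COST > 0.0
    ]

print(communication_decision())  # only ('task_done', 'agent2') is worth the cost
```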

Collaboration


Dive into Aurélie Beynier's collaborations.

Top Co-Authors

Nicolas Maudet (Pierre-and-Marie-Curie University)
Paul Weng (Carnegie Mellon University)
Emmanuel Hadoux (University College London)
Lounis Adouane (Centre national de la recherche scientifique)
Abdel-Illah Mouaddib (University of Caen Lower Normandy)
Julien Lesca (Paris Dauphine University)