Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laëtitia Matignon is active.

Publication


Featured research published by Laëtitia Matignon.


Intelligent Robots and Systems | 2010

Distributed control architecture for smart surfaces

K. Boutoustous; Guillaume J. Laurent; Eugen Dedu; Laëtitia Matignon; Julien Bourgeois; N. Le Fort-Piat

This paper presents a distributed control architecture to perform part recognition and closed-loop control of a distributed manipulation device. The architecture is based on decentralized cells that communicate with their four neighbors via peer-to-peer links. Several original algorithms are proposed to reconstruct, recognize and convey the object levitating on a new contactless distributed manipulation device. Experimental results show that each algorithm performs well on its own and that, together, the algorithms succeed in sorting the objects and conveying them to their final destinations. In the future, this architecture may be used to control MEMS-arrayed manipulation surfaces in order to develop Smart Surfaces for the conveyance, fine positioning and sorting of very small parts in micro-systems assembly lines.
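
As a rough illustration of the decentralized-cell architecture sketched in this abstract, here is a minimal Python toy in which each cell knows only its own sensor bit and gossips with its four neighbors over peer-to-peer links until every cell can reconstruct the object footprint. Class and function names, the grid, and the gossip scheme are our assumptions for illustration, not the paper's implementation.

```python
# Toy sketch of decentralized cells with 4-neighbor peer-to-peer links.
# Names (Cell, build_grid, step) are illustrative, not from the paper.

class Cell:
    def __init__(self, x, y, covered):
        self.x, self.y = x, y
        self.covered = covered          # truthy if the part lies over this cell
        self.neighbors = []             # filled in by the grid builder
        self.known = {(x, y): covered}  # locally reconstructed occupancy map

    def step(self):
        # Gossip round: merge each neighbor's partial map into our own.
        for n in self.neighbors:
            self.known.update(n.known)

def build_grid(occupancy):
    h, w = len(occupancy), len(occupancy[0])
    cells = [[Cell(x, y, occupancy[y][x]) for x in range(w)] for y in range(h)]
    for y in range(h):
        for x in range(w):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    cells[y][x].neighbors.append(cells[y + dy][x + dx])
    return [c for row in cells for c in row]

# After enough gossip rounds every cell holds the full object footprint
# and could run recognition locally, without any central controller.
cells = build_grid([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
for _ in range(4):                      # grid diameter bounds convergence
    for c in cells:
        c.step()
print(sorted(p for p, v in cells[0].known.items() if v))  # [(1, 0), (1, 1), (2, 0)]
```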


International Conference on Robotics and Automation | 2012

Distributed value functions for multi-robot exploration

Laëtitia Matignon; Laurent Jeanpierre; Abdel-Illah Mouaddib

This paper addresses the problem of exploring an unknown area with a team of autonomous robots using decentralized decision-making techniques. The localization aspect is not considered: it is assumed that the robots share their positions and have access to a map updated with all explored areas. A key problem is then the coordination of decentralized decision processes: each robot must choose appropriate exploration goals so that the team simultaneously explores different locations of the environment. We formalize this problem as a Decentralized Markov Decision Process (Dec-MDP) solved as a set of individual MDPs, where interactions between MDPs are captured in a distributed value function. Each robot thus computes locally a strategy that minimizes interactions between the robots and maximizes the space coverage of the team. Our technique has been implemented and evaluated in both real-world and simulated experiments.
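
The distributed value function (DVF) idea can be sketched as follows: each robot runs value iteration on its own MDP, but successor cells are devalued by the value other robots assign to them, weighted by the probability that those robots visit them. The grid, constants and update below are illustrative assumptions in the spirit of the abstract, not the authors' code.

```python
# Hedged sketch of a distributed value function on a tiny deterministic grid.
import numpy as np

GAMMA, F_IJ = 0.9, 0.5          # discount and interaction weight (assumed)
H, W = 4, 4
frontier_reward = np.zeros((H, W)); frontier_reward[0, 3] = 1.0  # one frontier

def neighbors(y, x):
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= y + dy < H and 0 <= x + dx < W:
            yield y + dy, x + dx

def dvf_iteration(v_self, v_others, p_visit_others):
    """One sweep: Bellman backup where successor values are reduced by the
    value other robots already assign to those cells."""
    def contested(y, x):
        return sum(p[y, x] * v[y, x] for p, v in zip(p_visit_others, v_others))
    new_v = np.zeros_like(v_self)
    for y in range(H):
        for x in range(W):
            best = max(v_self[ny, nx] - F_IJ * contested(ny, nx)
                       for ny, nx in neighbors(y, x))
            new_v[y, x] = frontier_reward[y, x] + GAMMA * best
    return new_v

# Robot 1 devalues the frontier that robot 2 is likely to cover,
# so its local strategy steers toward a different part of the map.
v2 = np.zeros((H, W)); v2[0, 3] = 1.0      # robot 2's value estimate
p2 = np.zeros((H, W)); p2[0, 3] = 0.8      # robot 2 will probably go there
v1 = np.zeros((H, W))
for _ in range(20):
    v1 = dvf_iteration(v1, [v2], [p2])
print(np.round(v1, 2))
```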


International Conference on Artificial Neural Networks | 2006

Reward function and initial values: better choices for accelerated goal-directed reinforcement learning

Laëtitia Matignon; Guillaume Laurent; Nadine Le Fort-Piat

An important issue in Reinforcement Learning (RL) is to accelerate or improve the learning process. In this paper, we study the influence of some RL parameters on learning speed. Although the convergence properties of RL have been widely studied, no precise rules exist for correctly choosing the reward function and the initial Q-values. Our method guides the choice of these parameters in the context of reaching a goal in minimal time. We develop a theoretical study and provide experimental justifications for choosing, on the one hand, the reward function and, on the other hand, particular initial Q-values based on a goal bias function.
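
A minimal sketch of goal-biased initialization, under our own assumptions about the task (a 1-D corridor with a single goal): instead of starting Q at zero, each entry is seeded with a distance-based estimate of the discounted return, so greedy exploration is drawn toward the goal from the first episode.

```python
# Illustrative goal-bias initialization; the corridor and constants are assumed.
import numpy as np

GAMMA, SIZE, GOAL = 0.95, 10, 9       # 1-D corridor, goal at the right end

def goal_bias(state):
    # Estimate of the discounted return: gamma^(steps-to-goal), r_goal = 1.
    return GAMMA ** abs(GOAL - state)

# Q[s, a]: a=0 moves left, a=1 moves right; seeded with the bias of the
# successor state each action leads to.
Q = np.array([[goal_bias(max(s - 1, 0)), goal_bias(min(s + 1, SIZE - 1))]
              for s in range(SIZE)])

# Standard Q-learning then proceeds unchanged on top of this prior.
def update(Q, s, a, r, s_next, alpha=0.1):
    Q[s, a] += alpha * (r + GAMMA * Q[s_next].max() - Q[s, a])

print(np.argmax(Q, axis=1))   # greedy policy already points toward the goal
```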


Journal of Intelligent and Robotic Systems | 2010

Designing Decentralized Controllers for Distributed-Air-Jet MEMS-Based Micromanipulators by Reinforcement Learning

Laëtitia Matignon; Guillaume J. Laurent; Nadine Le Fort-Piat; Yves-André Chapuis

Distributed-air-jet MEMS-based systems have been proposed to manipulate small parts at high velocities and without friction problems. The control of such distributed systems is very challenging, and the usual approaches for contact-based arrayed systems do not produce satisfactory results. In this paper, we investigate reinforcement learning control approaches for positioning and conveying an object. Reinforcement learning is a popular approach for finding controllers tailored exactly to the system without any prior model. We show how to apply reinforcement learning from a decentralized perspective and how to address the global-local trade-off. The simulation results demonstrate that reinforcement learning is a promising way to design control laws for such distributed systems.
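
The decentralized perspective can be illustrated by making each air-jet cell an independent Q-learner over a purely local observation. The observations, valve actions and reward scheme below are our assumptions, not the paper's exact design.

```python
# Sketch: every air-jet cell is an independent learner with its own small
# Q-table over a local observation (e.g. the object's offset from the cell),
# rather than one monolithic controller over the full state.
import random
from collections import defaultdict

ACTIONS = ("off", "blow_north", "blow_east")   # assumed valve commands
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class CellAgent:
    def __init__(self):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def act(self, obs):
        if random.random() < EPS:
            return random.choice(ACTIONS)          # exploration
        return max(self.q[obs], key=self.q[obs].get)

    def learn(self, obs, action, reward, obs_next):
        target = reward + GAMMA * max(self.q[obs_next].values())
        self.q[obs][action] += ALPHA * (target - self.q[obs][action])

# Each cell trains on its own (obs, action, reward) stream; global behavior
# (convey the object to a set-point) must emerge from a shared reward signal,
# which is exactly the global-local trade-off discussed in the abstract.
agents = [CellAgent() for _ in range(16)]
```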


Intelligent Robots and Systems | 2006

Improving Reinforcement Learning Speed for Robot Control

Laëtitia Matignon; Guillaume Laurent; N. Le Fort-Piat

Reinforcement learning (RL) is an intuitive way of programming that is well suited to autonomous robots because it does not require specifying how the task has to be achieved. However, RL remains difficult to apply in realistic domains because of its slow convergence. In this paper, we develop a theoretical study of the influence of some RL parameters on learning speed. We also provide experimental justifications for choosing the reward function and initial Q-values so as to improve RL speed in the context of a goal-directed robot task.


Procedia Computer Science | 2015

Modeling Biological Agents Beyond the Reinforcement-learning Paradigm

Olivier L. Georgeon; Rémi C. Casado; Laëtitia Matignon

It is widely acknowledged that biological beings (animals) are not Markov: modelers generally do not model them as agents receiving a complete representation of their environment's state as input (except perhaps in simple controlled tasks). In this paper, we claim that biological beings generally cannot recognize rewarding Markov states of their environment either. We therefore model them as agents trying to perform rewarding interactions with their environment (interaction-driven tasks), rather than as agents trying to reach rewarding states (state-driven tasks). We review two interaction-driven tasks, the AB and AABB tasks, and implement a non-Markov Reinforcement Learning (RL) algorithm based on historical sequences and Q-learning. Results show that this RL algorithm takes significantly longer than a constructivist algorithm implemented previously by Georgeon, Ritter, & Haynes (2009). This is because the constructivist algorithm directly learns and repeats hierarchical sequences of interactions, whereas the RL algorithm spends time learning Q-values. Along with theoretical arguments, these results support the constructivist paradigm for modeling biological agents.
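
The non-Markov RL baseline described here can be sketched with Q-learning over a sliding window of recent actions. The AABB environment below, which rewards actions continuing the cyclic pattern a,a,b,b and resets on errors, is our simplified rendering of the task, not the authors' code.

```python
# History-window Q-learning on a toy AABB task. The agent observes no
# environment state at all; its "state" is the last few actions it took.
import random
from collections import defaultdict

PATTERN, WINDOW, ALPHA, GAMMA, EPS = "aabb", 3, 0.2, 0.9, 0.1
q = defaultdict(lambda: {"a": 0.0, "b": 0.0})
phase = [0]                  # hidden environment state, invisible to the agent

def step_env(action):
    # Reward iff the action continues the a,a,b,b cycle; an error resets it.
    if action == PATTERN[phase[0]]:
        phase[0] = (phase[0] + 1) % 4
        return 1.0
    phase[0] = 0
    return -1.0

key = ""
for _ in range(20000):
    acts = q[key]
    action = random.choice("ab") if random.random() < EPS else max(acts, key=acts.get)
    r = step_env(action)
    next_key = (key + action)[-WINDOW:]     # slide the history window
    q[key][action] += ALPHA * (r + GAMMA * max(q[next_key].values()) - q[key][action])
    key = next_key
```

Note that the learner only succeeds because a 3-action window happens to disambiguate the phase of the cycle; the window length is a hidden prior, which illustrates the cost of the state-driven framing that the abstract argues against.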


Intelligent Robots and Systems | 2009

Design of semi-decentralized control laws for distributed-air-jet micromanipulators by reinforcement learning

Laëtitia Matignon; Guillaume J. Laurent; Nadine Le Fort-Piat

Recently, there has been a great deal of interest in learning in multi-agent systems to achieve decentralized control. Machine learning is a popular approach for finding controllers tailored exactly to the system without any prior model. In this paper, we propose a semi-decentralized reinforcement learning control approach for positioning and conveying an object on a contact-free MEMS-based distributed-manipulation system. The experimental results validate the semi-decentralized reinforcement learning method as a way to design control laws for such distributed systems.


International Conference on Tools with Artificial Intelligence | 2016

Incremental and Adaptive Multi-Robot Mapping for Human Scene Observation

Jonathan Cohen; Laëtitia Matignon; Olivier Simonin

This paper aims to use a fleet of mobile robots, each embedding a camera, to optimize the observation of a dynamic human scene. The scene is defined as a sequence of activities performed by a person in one place. The mobile robots have to cooperate to find a spatial configuration around the scene that maximizes the joint observation of the human pose skeleton. It is assumed that the robots can communicate but have no map of the environment and no external localization. This paper presents a concentric navigation topology that makes it easy to keep each robot's camera oriented towards the scene. This topology is combined with incremental mapping of the environment in order to limit the complexity of the exploration state space. We also introduce the marginal contribution of each robot's observation to facilitate stability in the search, while the exploration is guided by a meta-heuristic. We developed a simulator that uses skeleton data from real human pose captures. It allows us to compare variants of the approach and to demonstrate its features, such as adaptation to the dynamics of the scene and robustness to noise in the observations.
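
The marginal-contribution idea can be sketched directly: a robot's candidate viewpoint is scored by how many additional skeleton joints it adds to the team's joint observation, so robots avoid piling up on joints that are already covered. The helper visible_joints and the toy viewpoints below are hypothetical.

```python
# Hedged sketch of marginal observation contribution over skeleton joints.

def joint_observation(viewpoints, visible_joints):
    """Union of the skeleton joints seen from all viewpoints."""
    seen = set()
    for vp in viewpoints:
        seen |= visible_joints(vp)
    return seen

def marginal_contribution(robot_vp, other_vps, visible_joints):
    # How many joints does this robot add beyond what the others already see?
    without = joint_observation(other_vps, visible_joints)
    return len(without | visible_joints(robot_vp)) - len(without)

# Toy check: the front viewpoint is fully redundant with left + right.
views = {"left": {"head", "torso", "l_arm"}, "right": {"torso", "r_arm"},
         "front": {"head", "torso"}}
vis = lambda vp: views[vp]
print(marginal_contribution("front", ["left", "right"], vis))   # 0: redundant
print(marginal_contribution("right", ["left"], vis))            # 1: adds r_arm
```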


Acta Polytechnica | 2015

Decentralized multi-robot planning to explore and perceive

Laëtitia Matignon; Laurent Jeanpierre; Abdel-Illah Mouaddib

In a recent French robotics contest, the objective was to develop a multi-robot system able to autonomously map and explore an unknown area while also detecting and localizing objects. As a participant in this challenge, we proposed a new decentralized Markov decision process (Dec-MDP) resolution based on distributed value functions (DVF) to compute multi-robot exploration strategies. The idea is to take advantage of sparse interactions by allowing each robot to calculate locally a strategy that maximizes the explored space while minimizing interactions between robots. In this paper, we propose an adaptation of this method that also improves object recognition by integrating into the DVF the interest of covering explored areas with photos. The robots then act to maximize both the explored space and the photo coverage, ensuring better perception and object recognition.
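
The adapted criterion can be sketched as a weighted sum of an exploration term and a photo-coverage bonus over explored-but-unphotographed cells; the weights and the two maps below are illustrative assumptions, not the paper's formulation.

```python
# Toy sketch: value candidate goals by exploration gain plus photo coverage.
import numpy as np

W_EXPLORE, W_PHOTO = 1.0, 0.5                    # assumed weights
explored = np.array([[1, 1, 0], [1, 0, 0]])      # 1 = already mapped
photographed = np.array([[1, 0, 0], [0, 0, 0]])  # 1 = already covered by photos

exploration_gain = 1 - explored                   # new space to map
photo_gain = explored * (1 - photographed)        # mapped but not yet photographed
goal_value = W_EXPLORE * exploration_gain + W_PHOTO * photo_gain
print(goal_value)      # robots now also steer toward un-photographed cells
```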


European Conference on Mobile Robots | 2017

Multi-robot human scene observation based on hybrid metric-topological mapping

Laëtitia Matignon; Stephane d'Alu; Olivier Simonin

This paper presents a hybrid metric-topological mapping approach for multi-robot observation of a human scene. The scene is defined as a set of body joints. The mobile robots have to cooperate to find positions around the scene that maximize the number of observed joints. It is assumed that the robots can communicate but have no map of the environment. The map is updated cooperatively by exchanging only high-level data, thereby reducing the communication payload. The mapping is also carried out incrementally, to explore promising areas of the environment while keeping the state-space complexity reasonable. We propose an on-line distributed heuristic search combined with this hybrid mapping. We show the efficiency of the approach on a fleet of three real robots, in particular its ability to quickly explore and find the team position that maximizes the joint observation quality.
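
One plausible reading of the navigation side, combining this paper with the concentric topology of the 2016 paper above, is a set of topological nodes on rings around the scene, each carrying only a high-level utility so robots exchange node scores rather than full metric maps. The ring/sector discretization and the greedy step below are our assumptions, not the paper's algorithm.

```python
# Hedged sketch of topological viewpoints on concentric rings around a scene.
import math

RINGS, SECTORS = 3, 12        # assumed discretization around the scene

def node_position(ring, sector, center=(0.0, 0.0), r0=1.0, dr=0.5):
    # Polar-to-Cartesian pose of a topological node, camera facing the center.
    r = r0 + ring * dr
    theta = 2 * math.pi * sector / SECTORS
    return (center[0] + r * math.cos(theta),
            center[1] + r * math.sin(theta),
            theta + math.pi)            # heading points back at the scene

def neighbors(ring, sector):
    # Moving along a ring or between rings keeps the camera on the scene.
    yield ring, (sector + 1) % SECTORS
    yield ring, (sector - 1) % SECTORS
    if ring > 0:
        yield ring - 1, sector
    if ring < RINGS - 1:
        yield ring + 1, sector

def greedy_step(node, utility):
    """One step of an on-line search: move to the best-scoring neighbor,
    where utility(node) would be e.g. the joints observed from that node."""
    best = max(neighbors(*node), key=utility)
    return best if utility(best) > utility(node) else node
```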

Collaboration


Dive into Laëtitia Matignon's collaborations.

Top Co-Authors

Olivier Simonin (Institut national des sciences appliquées de Lyon)
Guillaume Laurent (Centre national de la recherche scientifique)
Nadine Le Fort-Piat (Centre national de la recherche scientifique)
Guillaume J. Laurent (University of Franche-Comté)
Nadine Le Fort Piat (École nationale supérieure de mécanique et des microtechniques)
N. Le Fort-Piat (Centre national de la recherche scientifique)