
Publication


Featured research published by Nicolas Cuperlier.


Frontiers in Neurorobotics | 2013

From self-assessment to frustration, a small step toward autonomy in robotic navigation

Adrien Jauffret; Nicolas Cuperlier; Philippe Gaussier; Philippe Tarroux

Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on behavior quality, from a given fitness system, in order to take correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interaction to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty detection algorithm, that may be able to self-evaluate any sensorimotor strategy. This architecture learns contingencies between sensations and actions, giving the expected sensation from the previous perception. Prediction error, arising from surprising events, provides a measure of the quality of the underlying sensorimotor contingencies. We show how a simple second-order controller (emotional system) based on prediction progress allows the system to regulate its behavior to solve complex navigation tasks, and also succeeds in asking for help when it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We carried out several experiments that demonstrate these properties for two different strategies (road following and place-cell-based navigation) in different situations.
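The core loop the abstract describes (a predictor of the next sensation whose error drives a frustration-like signal) can be sketched as follows. This is a hypothetical, heavily simplified stand-in for the paper's neural architecture: the class name, the linear delta-rule predictor, and the leaky "frustration" integrator are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class SelfAssessmentMonitor:
    """Illustrative sketch: a linear sensorimotor predictor whose
    prediction error is leakily integrated into a frustration signal."""

    def __init__(self, n_sensors, n_actions, lr=0.1, frustration_threshold=0.5):
        # weights mapping (previous sensation, action) -> expected sensation
        self.W = np.zeros((n_sensors, n_sensors + n_actions))
        self.lr = lr
        self.frustration = 0.0
        self.threshold = frustration_threshold

    def step(self, prev_sensation, action, sensation):
        x = np.concatenate([prev_sensation, action])
        predicted = self.W @ x
        error = sensation - predicted
        # delta-rule update of the sensorimotor contingencies
        self.W += self.lr * np.outer(error, x)
        # surprise: magnitude of the prediction error
        surprise = np.linalg.norm(error)
        # leaky integration of surprise into a frustration-like signal
        self.frustration = 0.9 * self.frustration + 0.1 * surprise
        return surprise

    def needs_help(self):
        # a persistently surprising world (deadlock) keeps frustration high
        return self.frustration > self.threshold
```

On a predictable sensorimotor loop the prediction error vanishes and frustration decays, so the monitor never asks for help; only sustained, unpredicted surprise would push it over the threshold.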


simulation of adaptive behavior | 2006

Transition cells for navigation and planning in an unknown environment

Nicolas Cuperlier; Mathias Quoy; Christophe Giovannangeli; Philippe Gaussier; Philippe Laroque

We present a navigation and planning system using vision to extract non-predefined landmarks, a dead-reckoning system generating the integrated movement, and a topological map. Localization and planning remain possible even if the map is partially unknown. An omnidirectional camera gives panoramic images from which unpredefined landmarks are extracted. The set of landmarks and their azimuths relative to a fixed orientation defines a particular location without any need for an external environment map. Transitions between two locations recognized at times t and t-1 are explicitly coded, and define spatio-temporal transitions. These transitions are the sensorimotor unit chosen to support planning. During exploration, a topological map (our cognitive map) is learned online from these transitions, without any Cartesian coordinates or occupancy grids. The edges of this map may be modified in order to take into account dynamical changes in the environment. The transitions are linked with the integrated movement used for moving from one place to another. When planning is required, the activities of transitions coding for the required goal in the cognitive map are enough to bias predicted transitions and obtain the required movement.
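The transition-based map described above can be sketched as a small graph structure: nodes are places, learned edges are transitions carrying the integrated movement, and planning propagates goal activity backwards along transitions. This is an illustrative simplification (class and method names are invented; breadth-first distance stands in for the paper's neural goal-activity diffusion).

```python
from collections import defaultdict, deque

class TransitionMap:
    """Sketch of a transition-based cognitive map: no Cartesian
    coordinates or occupancy grids, only place-to-place transitions
    linked to the integrated movement between them."""

    def __init__(self):
        self.moves = {}              # (place_prev, place) -> integrated movement
        self.out = defaultdict(set)  # place -> directly reachable places

    def learn_transition(self, prev_place, place, movement):
        # transitions between locations at t-1 and t are explicitly coded
        self.moves[(prev_place, place)] = movement
        self.out[prev_place].add(place)

    def plan(self, start, goal):
        """Diffuse distance-to-goal backwards (standing in for goal
        activity) and return the first movement toward the goal."""
        if start == goal:
            return None
        dist = {goal: 0}
        queue = deque([goal])
        while queue:
            p = queue.popleft()
            for q, nxt in self.moves:        # walk edges in reverse
                if nxt == p and q not in dist:
                    dist[q] = dist[p] + 1
                    queue.append(q)
        if start not in dist:
            return None                      # goal unreachable from start
        # the outgoing transition whose target is closest to the goal wins
        best = min((p for p in self.out[start] if p in dist),
                   key=lambda p: dist[p])
        return self.moves[(start, best)]
```

Because edges are learned independently, removing or re-learning one (after a dynamical change in the environment) leaves the rest of the map intact.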


Journal of Real-time Image Processing | 2015

Embedded and real-time architecture for bio-inspired vision-based robot navigation

Laurent Fiack; Nicolas Cuperlier; Benoît Miramond

A recent trend in several robotics tasks is to consider vision as the primary sense to perceive the environment or to interact with humans. Therefore, vision processing becomes a central and challenging matter for the design of real-time control architectures. In this paper we follow a biological inspiration to propose a real-time and embedded control system relying on visual attention to learn specific actions in each place recognized by our robot. Faced with a performance challenge, the attentional model reduces vision processing to a few regions of the visual field. However, the computational complexity of the visual chain remains an issue for a processing system embedded on an indoor robot. That is why we propose, as the first part of our system, a full-hardware architecture prototyped on reconfigurable devices to detect salient features at the camera frequency. The second part continuously learns these features in order to implement specific robotic tasks. This neural control layer is implemented as embedded software, making the robot fully autonomous from a computational point of view. The integration of such a system onto the robot not only accelerates the frame rate of the visual processing and relieves the control architecture, but also compresses the data flow at the output of the camera, thus reducing communication and energy consumption. We present the complete embedded sensorimotor architecture and the experimental setup. The presented results demonstrate its real-time behavior in vision-based navigation tasks.


Paladyn | 2010

Cognitive map plasticity and imitation strategies to improve individual and social behaviors of autonomous agents

Philippe Laroque; Nathalie Gaussier; Nicolas Cuperlier; Mathias Quoy; Philippe Gaussier

Starting from neurobiological hypotheses on the existence of place cells (PC) in the brain, the aim of this article is to show how a few assumptions at both the individual and social levels can lead to the emergence of non-trivial global behaviors in a multi-agent system (MAS). In particular, we show that adding a simple Hebbian learning mechanism to a cognitive map allows autonomous, situated agents to adapt themselves to a dynamically changing environment, and that even simple agent-following strategies (driven either by similarities in agent movement or by individual marks, "signatures", carried by agents) can dramatically improve the global performance of the MAS in terms of agent survival rate. Moreover, we show that analogies can be drawn between such a MAS and the emergence of certain social behaviors.
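A Hebbian rule on cognitive-map weights is simple enough to sketch directly. The update below is an illustrative guess at the kind of rule involved (the learning rate, decay term, and clipping are assumptions, not the paper's parameters): co-active place cells strengthen their link, while unused links slowly decay, which is what lets the map track a dynamically changing environment.

```python
import numpy as np

def hebbian_update(W, activity, lr=0.1, decay=0.01):
    """One Hebbian step on cognitive-map weights (illustrative sketch).
    W: (n, n) weight matrix between place cells.
    activity: (n,) place-cell activity vector."""
    # co-activation strengthens links; passive decay forgets stale ones
    W = W + lr * np.outer(activity, activity) - decay * W
    np.fill_diagonal(W, 0.0)        # no self-connections
    return np.clip(W, 0.0, 1.0)     # keep weights in a bounded range
```

Run over an agent's trajectory, this builds strong links between successively visited places; when the environment changes and a link stops being used, its weight fades and planning reroutes around it.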


international work-conference on the interplay between natural and artificial computation | 2005

Transition cells and neural fields for navigation and planning

Nicolas Cuperlier; Mathias Quoy; Philippe Laroque; Philippe Gaussier

We have developed a mobile robot control system based on hippocampal and prefrontal models. We propose an alternative to models that rely on cognitive maps linking place cells. Our experiments show that using transition cells is more efficient than using place cells. A transition cell links two locations with the integrated direction used between them. Furthermore, it is possible to fuse the different directions proposed by nearby transitions and obstacles into an effective direction by using a neural field. The direction to follow is the stable fixed point of the neural field dynamics, and its derivative gives the angular rotation speed. Simulations and robotics experiments are carried out.
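The fusion step can be illustrated with a minimal 1-D field over heading angles. This is a deliberately reduced sketch (leaky integration to the field's stable fixed point, Gaussian excitation around transition-proposed directions, inhibition around obstacle directions); the widths, amplitudes, and the absence of lateral interactions are assumptions, not the paper's neural field parameters.

```python
import numpy as np

def neural_field_direction(target_dirs, obstacle_dirs, n=360, steps=100, dt=0.1):
    """Fuse proposed directions and obstacles on a ring of heading angles
    and read out the direction at the field's stable activity peak."""
    angles = np.linspace(-np.pi, np.pi, n, endpoint=False)

    def bump(center, amp, width=0.2):
        # Gaussian bump with wrapped angular distance to its center
        d = np.angle(np.exp(1j * (angles - center)))
        return amp * np.exp(-d**2 / width)

    inp = np.zeros(n)
    for a in target_dirs:
        inp += bump(a, +1.0)      # transitions excite their direction
    for a in obstacle_dirs:
        inp += bump(a, -1.5)      # obstacles inhibit theirs

    u = np.zeros(n)
    for _ in range(steps):
        u += dt * (-u + inp)      # relax toward the stable fixed point u* = inp
    return angles[np.argmax(u)]   # direction to follow = peak of the field
```

With a single target and no obstacle the field settles on the target direction; an obstacle slightly to one side of the target pushes the stable peak to the other side, which is the fused, effective direction the robot would follow.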


international conference on computer vision | 2010

Efficient neural models for visual attention

Sylvain Chevallier; Nicolas Cuperlier; Philippe Gaussier

Human vision relies on attention to select only a few regions to process, thus reducing the complexity and processing time of visual tasks. Artificial vision systems can benefit from a bio-inspired attentional process relying on neural models. In such applications, which is the most efficient neural model: spike-based or frequency-based? We propose an evaluation of both neural models in terms of complexity and quality of results (on artificial and natural images).


PLOS ONE | 2017

Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task

Marwen Belkaid; Nicolas Cuperlier; Philippe Gaussier

Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher-order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot's visual attention during active object recognition. Through behavioral and statistical analyses, we show that this mechanism improves the robot's performance and fosters exploratory behavior to avoid deadlocks.
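One plausible reading of the top-down bias is a frustration-gated trade-off between exploiting the most salient region and exploring elsewhere. The function below is a hypothetical toy version of that idea (the gating scheme and names are assumptions for illustration, not the paper's network):

```python
import random

def attention_bias(saliency, frustration, rng=random.Random(0)):
    """Toy sketch of emotional metacontrol of attention: low frustration
    exploits the most salient region; rising frustration increasingly
    forces attention onto other regions to escape deadlocks."""
    if rng.random() < min(frustration, 1.0):
        return rng.randrange(len(saliency))   # explore another region
    # exploit: attend the most salient region
    return max(range(len(saliency)), key=saliency.__getitem__)
```

A calm robot keeps fixating the best candidate; a frustrated one (e.g., after repeated recognition failures) starts sampling other regions, which is the deadlock-avoidance behavior the abstract reports.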


intelligent robots and systems | 2015

Emotional modulation of peripersonal space as a way to represent reachable and comfort areas

Marwen Belkaid; Nicolas Cuperlier; Philippe Gaussier

This work is based on the idea that, as in biological organisms, basic motivated behavior can be represented in terms of approach and avoidance. We propose a model for emotional modulation of the robot's peripersonal space, that is to say, the area that is both reachable and secure; the space where the robot can act. The contribution of this paper is a generic model that integrates various stimuli to build a representation of reachable and comfort areas used to control robot behavior. Such an architecture is tested in three experiments using a real robot and simulations. It is compared with two altered versions of the architecture. We show how our model allows the robot to solve various tasks, display emotionally colored behaviors, and account for results from psychological studies.


international symposium on neural networks | 2013

How can a robot evaluate its own behavior? A neural model for self-assessment

Adrien Jauffret; Caroline Grand; Nicolas Cuperlier; Philippe Gaussier; Philippe Tarroux

Allowing a robot to autonomously navigate wide and unknown environments not only requires a set of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and monitoring strategies. Monitoring strategies requires feedback on behavior quality, from a given fitness system, to take correct decisions. In this work, we focus on how violations of the expectations of such a fitness system can be detected. Following an incremental and bio-mimetic approach, we first present two different sensorimotor strategies our robot can use to navigate: a place-cell-based strategy and a road-following strategy. Then, we present a neural architecture that may be able to evaluate both navigation strategies. This model is based on an online novelty detection algorithm using a neural predictor. This neural predictor learns contingencies between sensations and actions, giving the expected sensation based on the previous perception. Prediction error, arising from surprising events, provides a direct measure of the quality of the underlying sensorimotor contingencies involved. We propose that this model might be a key structure toward self-assessment. We carried out several experiments that demonstrate these properties for both strategies.


international symposium on neural networks | 2016

Combining local and global visual information in context-based neurorobotic navigation

Marwen Belkaid; Nicolas Cuperlier; Philippe Gaussier

In robotic navigation, biologically inspired localization models have often exhibited interesting features and proven competitive with other solutions in terms of adaptability and performance. In general, place recognition systems rely on global or local visual descriptors, or both. In this paper, we propose a model of context-based place cells combining these two sources of information. Global visual features are extracted to represent visual contexts. Based on the idea of global precedence, contexts drive a more refined recognition level that takes local visual descriptors as input. We evaluate this model on a robotic navigation dataset that we recorded outdoors. Thus, our contribution is twofold: 1) a bio-inspired model of context-based place recognition using neural networks; and 2) an evaluation assessing its suitability for applications on real robots, comparing it to four other architectures (two variants of the model and two stacking-based solutions) in terms of performance and computational cost. The context-based model obtains the highest score on the three metrics we consider, or is second only to one of its own variants. Moreover, a key feature keeps its computational cost constant over time, whereas the cost of the other methods increases. These promising results suggest that this model is a good candidate for robust place recognition in wide environments.
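The two-level gating idea can be sketched with a simple nearest-neighbour stand-in for the paper's neural networks: the global descriptor first selects a visual context, then only that context's place codes, built from local descriptors, compete for recognition. Class names, the distance metric, and the context-creation radius are all assumptions for illustration.

```python
import numpy as np

class ContextBasedPlaceRecognition:
    """Sketch of context-gated place recognition: global features gate
    which subset of local place codes is searched, keeping per-query
    cost bounded by the size of one context."""

    def __init__(self):
        # each context: (global descriptor, list of (local descriptor, place id))
        self.contexts = []

    def _nearest_context(self, global_desc):
        if not self.contexts:
            return None
        return min(self.contexts,
                   key=lambda c: np.linalg.norm(c[0] - np.asarray(global_desc)))

    def learn(self, global_desc, local_desc, place_id, context_radius=1.0):
        ctx = self._nearest_context(global_desc)
        if ctx is None or np.linalg.norm(ctx[0] - np.asarray(global_desc)) > context_radius:
            ctx = (np.array(global_desc, float), [])   # open a new context
            self.contexts.append(ctx)
        ctx[1].append((np.array(local_desc, float), place_id))

    def recognize(self, global_desc, local_desc):
        # global precedence: the context gates the place-code competition
        ctx = self._nearest_context(global_desc)
        _, place_id = min(ctx[1],
                          key=lambda p: np.linalg.norm(p[0] - np.asarray(local_desc)))
        return place_id
```

Because recognition only searches one context's place codes, an ambiguous local descriptor that occurs in two distant contexts is still resolved correctly by the global gate, and query cost does not grow with the total number of learned places.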

Collaboration


Dive into Nicolas Cuperlier's collaborations.

Top Co-Authors

Adrien Jauffret (Centre national de la recherche scientifique)

Philippe Tarroux (Centre national de la recherche scientifique)

Benoit Miramond (University of Nice Sophia Antipolis)