Publication


Featured research published by Jérémy Fix.


Cognitive Computation | 2011

A Dynamic Neural Field Approach to the Covert and Overt Deployment of Spatial Attention

Jérémy Fix; Nicolas P. Rougier; Frédéric Alexandre

The visual exploration of a scene involves the interplay of several competing processes (for example, selecting the next saccade or keeping fixation) and the integration of bottom-up (e.g. contrast) and top-down information (the target of a visual search task). Identifying the neural mechanisms involved in these processes and in the integration of this information remains a challenging question. Visual attention refers to all these processes, both when the eyes remain fixed (covert attention) and when they are moving (overt attention). Popular computational models of visual attention assume that the visual information remains fixed while attention is deployed, whereas primates execute around three saccadic eye movements per second, each abruptly changing this information. In this paper we present a model relying on neural fields, a paradigm for distributed, asynchronous and numerical computation, and show that covert and overt attention can emerge from such a substrate. We identify four elementary mechanisms and propose a possible interaction between them: selecting the next locus of attention, memorizing the previously attended locations, anticipating the consequences of eye movements, and integrating bottom-up and top-down information in order to perform a visual search task with saccadic eye movements.
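The neural-field substrate the abstract refers to can be sketched in a few lines. The following is a minimal, assumed illustration of a 1-D Amari-type dynamic neural field with a difference-of-Gaussians kernel (local excitation, broader inhibition), not the authors' exact model: two stimuli compete, and the field develops its strongest peak at the more salient one, a crude analogue of selecting the next locus of attention.

```python
import numpy as np

# 1-D dynamic neural field (Amari type), discretized on N points.
N, dt, tau, h = 100, 0.1, 1.0, -0.2
x = np.linspace(0.0, 1.0, N)

# Difference-of-Gaussians lateral kernel: local excitation, broader inhibition.
d = np.abs(x[:, None] - x[None, :])
w = 1.5 * np.exp(-d**2 / (2 * 0.05**2)) - 0.75 * np.exp(-d**2 / (2 * 0.2**2))

def step(u, inp):
    """One Euler step of tau * du/dt = -u + integral(w * f(u)) + inp + h."""
    f = np.maximum(u, 0.0)                 # rectified firing rate
    return u + dt / tau * (-u + (w @ f) / N + inp + h)

# Two competing stimuli, the left one slightly more salient.
inp = np.exp(-(x - 0.3)**2 / 0.005) + 0.8 * np.exp(-(x - 0.7)**2 / 0.005)
u = np.full(N, h)                          # start at the resting level
for _ in range(300):
    u = step(u, inp)
print("most active location:", x[np.argmax(u)])
```

The lateral interaction makes nearby locations reinforce each other and distant ones compete, which is the mechanism the model builds its attentional dynamics on.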


Network: Computation In Neural Systems | 2012

DANA: Distributed (asynchronous) Numerical and Adaptive modelling framework

Nicolas P. Rougier; Jérémy Fix

DANA is a Python framework (http://dana.loria.fr) whose computational paradigm is grounded in the notion of a unit: essentially a set of time-dependent values varying under the influence of other units via adaptive weighted connections. The evolution of a unit's values is defined by a set of differential equations expressed in standard mathematical notation, which greatly eases their definition. The units are organized into groups that form a model. Each unit can be connected to any other unit (including itself) using a weighted connection. The DANA framework offers a set of core objects needed to design and run such models. The modeler only has to define the equations of a unit as well as the equations governing the training of the connections. The simulation is completely transparent to the modeler and is handled by DANA. This allows DANA to be used for a wide range of numerical and distributed models, as long as they fit the proposed framework (e.g. cellular automata, reaction-diffusion systems, decentralized neural networks, recurrent neural networks, kernel-based image processing, etc.).
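The unit/connection paradigm described above can be sketched generically as follows. This is an assumed illustration of the idea, not DANA's actual API (see http://dana.loria.fr for that): each unit holds a value V obeying dV/dt = -V + (weighted input) + I, integrated here with explicit Euler steps.

```python
import numpy as np

class Group:
    """A group of units whose values evolve under weighted connections."""
    def __init__(self, n):
        self.V = np.zeros(n)      # the units' time-dependent values
        self.links = []           # list of (weight matrix, source group)

    def connect(self, src, W):
        self.links.append((np.asarray(W), src))

    def step(self, I, dt=0.1):
        # Euler step of dV/dt = -V + sum(W @ V_src) + I
        drive = sum(W @ src.V for W, src in self.links)
        self.V = self.V + dt * (-self.V + drive + I)

# A group connected to itself with gain 0.5 relaxes to the fixed point
# of dV/dt = 0, i.e. V = I / (1 - 0.5) = 2 * I.
g = Group(3)
g.connect(g, 0.5 * np.eye(3))
for _ in range(200):
    g.step(I=np.array([1.0, 2.0, 3.0]))
print(g.V)    # approaches [2, 4, 6]
```

The appeal of the paradigm is that the modeler only writes the right-hand side of the equation; the integration loop is the same for every model.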


Simulation of Adaptive Behavior | 2007

A Distributed Computational Model of Spatial Memory Anticipation During a Visual Search Task

Jérémy Fix; Julien Vitay; Nicolas P. Rougier

Some visual search tasks require memorizing the location of stimuli that have previously been focused. Considering eye movements raises the question of how we are able to maintain a coherent memory despite the frequent, drastic changes in perception. In this article, we present a computational model that is able to anticipate the consequences of eye movements on visual perception in order to update a spatial working memory.


Journal of Physiology-Paris | 2007

From physiological principles to computational models of the cortex

Jérémy Fix; Nicolas P. Rougier; Frédéric Alexandre

Understanding the brain requires the assimilation of an increasing amount of biological data, ranging from single-cell recordings to brain imaging studies and behavioral analysis. The description of cognition at these three levels provides us with a grid of analysis that can be exploited for the design of computational models. Beyond data related to specific tasks to be emulated by models, each of these levels also emphasizes principles of computation that must be obeyed to truly implement biologically inspired computations. The advantages of such a joint approach are twofold: computational models are a powerful tool for experimenting with brain theories and assessing them on the implementation of realistic tasks, such as visual search. They are also a way to explore and exploit an original formalism of asynchronous, distributed and adaptive computation, with such precious properties as self-organization, emergence, robustness and, more generally, the ability to interact intelligently with the world. In this article, we first discuss three levels at which a cortical circuit might be observed to provide a modeler with sufficient information to design a computational model, and illustrate this principle with an application to the control of visual attention.


Neural Networks | 2013

Template based black-box optimization of dynamic neural fields

Jérémy Fix

Due to their strong non-linear behavior, optimizing the parameters of dynamic neural fields is particularly challenging and often relies on expert knowledge and trial and error. In this paper, we study the ability of particle swarm optimization (PSO) and covariance matrix adaptation (CMA-ES) to solve this problem when scenarios specifying the input feeding the field and the desired output profiles are provided. A set of spatial lower and upper bounds, called templates, is introduced to define a set of desired output profiles. The usefulness of the method is illustrated on three classical scenarios of dynamic neural fields: competition, working memory and tracking.
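The template idea can be sketched with a plain PSO on a toy stand-in for the field (an assumption for illustration, not the paper's setup): tune the (amplitude, width) of a Gaussian output profile so that it lies between a lower and an upper spatial template, with the cost being the total violation of the two bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 101)
lower = np.where(np.abs(x) < 0.2, 0.5, 0.0)   # output must exceed this
upper = np.where(np.abs(x) < 0.4, 1.2, 0.1)   # and stay below this

def cost(p):
    a, s = p
    s = abs(s) + 1e-9                          # guard against zero width
    out = a * np.exp(-x**2 / (2 * s**2))
    return np.sum(np.maximum(lower - out, 0) + np.maximum(out - upper, 0))

# Standard PSO loop: inertia, cognitive and social terms.
n, dim = 20, 2
pos = rng.uniform([0.0, 0.01], [2.0, 1.0], (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pcost)].copy()
for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pcost)].copy()
print("best (amplitude, width):", gbest, "violation:", cost(gbest))
```

The attraction of this formulation is that the objective only needs the system's output profile, not its gradient, which is exactly what makes black-box optimizers suitable for strongly non-linear fields.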


SIDE'12: Proceedings of the 2012 International Conference on Swarm and Evolutionary Computation | 2012

Monte-Carlo swarm policy search

Jérémy Fix; Matthieu Geist

Finding optimal controllers for stochastic systems is a particularly challenging problem tackled by the optimal control and reinforcement learning communities. A classic paradigm for handling such problems is provided by Markov decision processes. However, the resulting underlying optimization problem is difficult to solve. In this paper, we explore the use of particle swarm optimization to learn optimal controllers and show through some non-trivial experiments that it is a particularly promising approach.


10th International Workshop on Self-Organizing Maps (WSOM) | 2014

Dynamic Formation of Self-Organizing Maps

Jérémy Fix

In this paper, an original dynamical system derived from dynamic neural fields is studied in the context of the formation of topographic maps. This dynamical system overcomes limitations of Kohonen's original Self-Organizing Map (SOM) model: both competition and learning are driven by dynamical systems and performed continuously in time. The equations governing competition are shown to be able to reconsider their decision dynamically, through a mechanism that renders the current decision unstable, which avoids the need for a global reset signal.
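For contrast, here is the discrete-time Kohonen SOM step that the dynamical-system formulation generalizes: competition is an instantaneous argmin, the kind of hard, externally reset decision the paper's continuous dynamics avoid.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
W = rng.random((n, 2))                    # codebook vectors of a 1-D map
pos = np.arange(n)                        # unit positions on the map

eta, sigma = 0.5, 3.0
for _ in range(2000):
    xs = rng.random(2)                    # sample from uniform 2-D input
    bmu = np.argmin(np.sum((W - xs)**2, axis=1))    # winner-take-all
    hnb = np.exp(-(pos - bmu)**2 / (2 * sigma**2))  # neighborhood function
    W += eta * hnb[:, None] * (xs - W)
    eta *= 0.999                          # annealed learning rate
    sigma *= 0.999                        # shrinking neighborhood

# After training, neighboring units hold nearby codebook vectors.
print("mean neighbor distance:",
      np.mean(np.linalg.norm(np.diff(W, axis=0), axis=1)))
```

Note the two ingredients the paper replaces: the hard argmin (a fresh global decision at each input) and the hand-tuned annealing schedules, both of which disappear when competition and learning run as continuous-time dynamics.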


1st International Conference on Cognitive Neurodynamics (ICCN 2007) | 2008

A Computational Approach to the Control of Voluntary Saccadic Eye Movements

Jérémy Fix

We present a computational model of how several brain areas involved in the control of voluntary saccadic eye movements might cooperate. This model is based on anatomical and physiological considerations and emphasizes the temporal evolution of the activities in each of these areas and their potential functional role in the control of saccades.


2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB) | 2011

Dynamic neural field optimization using the unscented Kalman filter

Jérémy Fix; Matthieu Geist; Olivier Pietquin; Hervé Frezza-Buet

Dynamic neural fields have been proposed as a continuous model of neural tissue. When dynamic neural fields are used in practical applications, the tuning of their parameters is a challenging issue that most of the time relies on expert knowledge of the influence of each parameter. The methods proposed so far for automatically tuning these parameters rely either on genetic algorithms or on gradient descent. The latter requires explicitly computing the gradient of a cost function, which is not always possible, or at least difficult and costly. Here we propose to use the unscented Kalman filter, a derivative-free algorithm for parameter estimation, which proves able to efficiently optimize the parameters of a dynamic neural field.
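The derivative-free flavor of this approach can be sketched on a toy observation model (an assumption for illustration, not the paper's neural-field setup): treat the unknown parameters theta = (a, b) as a random-walk state and recover them from noisy samples of y(t) = a * exp(-b * t) with an unscented Kalman filter, which only ever evaluates the model, never differentiates it.

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta = np.array([2.0, 0.5])

def g(theta, t):
    return theta[0] * np.exp(-theta[1] * t)

L, kappa = 2, 1.0                         # state dim, Julier's kappa = 3 - L
Wm = np.full(2 * L + 1, 1.0 / (2 * (L + kappa)))
Wm[0] = kappa / (L + kappa)               # sigma-point weights, sum to 1
x = np.array([1.0, 1.0])                  # initial parameter guess
P = 0.5 * np.eye(2)                       # initial uncertainty
Q, R = 1e-6 * np.eye(2), 0.01             # parameter drift / observation noise

for _ in range(200):
    t = rng.uniform(0.0, 4.0)
    y = g(true_theta, t) + rng.normal(0.0, 0.1)
    P = P + Q                                   # random-walk predict step
    S = np.linalg.cholesky((L + kappa) * P)     # matrix square root
    chi = np.vstack([x, x + S.T, x - S.T])      # 2L+1 sigma points
    Y = np.array([g(c, t) for c in chi])        # propagate through the model
    y_hat = Wm @ Y
    Pyy = Wm @ (Y - y_hat)**2 + R               # innovation variance (scalar)
    Pxy = (chi - x).T @ (Wm * (Y - y_hat))      # state-observation covariance
    K = Pxy / Pyy                               # Kalman gain
    x = x + K * (y - y_hat)
    P = P - np.outer(K, K) * Pyy

print("estimated theta:", x)
```

The point of the sigma-point machinery is exactly the one the abstract makes: the nonlinear model is only sampled at a handful of points, so no gradient of the cost function is ever needed.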


KI | 2009

Biological Models of Reinforcement Learning

Julien Vitay; Jérémy Fix; Fred H. Hamker; Henning Schroll; Frederik Beuth

Collaboration


Jérémy Fix's top co-authors.

Frédéric Alexandre (Centre national de la recherche scientifique)
Julien Vitay (Chemnitz University of Technology)
Hervé Frezza-Buet (Centre national de la recherche scientifique)
Fred H. Hamker (Chemnitz University of Technology)
Henning Schroll (Chemnitz University of Technology)
Frederik Beuth (Chemnitz University of Technology)