
Publication


Featured research published by Simon M. Stringer.


Network: Computation In Neural Systems | 2006

Entorhinal cortex grid cells can map to hippocampal place cells by competitive learning

Edmund T. Rolls; Simon M. Stringer; Thomas Elliot

‘Grid cells’ in the dorsocaudal medial entorhinal cortex (dMEC) are activated when a rat is located at any of the vertices of a grid of equilateral triangles covering the environment. dMEC grid cells have different frequencies and phase offsets. However, cells in the dentate gyrus (DG) and hippocampal area CA3 of the rodent typically display place fields, where individual cells are active over only a single portion of the space. In a model of the hippocampus, we have shown that the connectivity from the entorhinal cortex to the dentate granule cells could allow the dentate granule cells to operate as a competitive network to recode their inputs to produce sparse orthogonal representations, and this includes spatial pattern separation. In this paper we show that the same computational hypothesis can account for the mapping of EC grid cells to dentate place cells. We show that the learning in the competitive network is an important part of the way in which the mapping can be achieved. We further show that incorporation of a short term memory trace into the associative learning can help to produce the relatively broad place fields found in the hippocampus.
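The competitive-learning mechanism this abstract describes can be sketched in a few lines. This is a toy illustration, not the authors' model: the grid-cell tuning, network sizes, and learning parameters are all assumptions, and a one-dimensional track stands in for the two-dimensional environment.

```python
# Toy sketch: winner-take-all competitive learning maps grid-cell-like
# periodic inputs onto place-like outputs. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_grid, n_place = 100, 30, 20
x = np.linspace(0.0, 1.0, n_positions)

# Grid-cell firing: rectified cosines with assorted frequencies and phase offsets.
freqs = rng.uniform(3, 9, n_grid)
phases = rng.uniform(0, 1, n_grid)
grid = np.maximum(np.cos(2 * np.pi * (freqs[:, None] * x[None, :] + phases[:, None])), 0.0)

W = rng.random((n_place, n_grid))
W /= np.linalg.norm(W, axis=1, keepdims=True)    # normalize each unit's weight vector

eta = 0.1
for epoch in range(30):
    for p in rng.permutation(n_positions):
        inp = grid[:, p]
        winner = np.argmax(W @ inp)              # winner-take-all competition
        W[winner] += eta * inp                   # Hebbian update for the winner only
        W[winner] /= np.linalg.norm(W[winner])   # renormalize (implements competition)

responses = W @ grid  # each row: one output unit's response across positions
```

After training, each row of `responses` gives one output unit's spatial tuning; competition with weight renormalization tends to leave units responding over restricted portions of the track, i.e. place-like fields.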


Neural Computation | 2002

Invariant object recognition in the visual system with novel views of 3D objects

Simon M. Stringer; Edmund T. Rolls

To form view-invariant representations of objects, neurons in the inferior temporal cortex may associate together different views of an object, which tend to occur close together in time under natural viewing conditions. This can be achieved in neuronal network models of this process by using an associative learning rule with a short-term temporal memory trace. It is postulated that within a view, neurons learn representations that enable them to generalize within variations of that view. When three-dimensional (3D) objects are rotated within small angles (up to, e.g., 30 degrees), their surface features undergo geometric distortion due to the change of perspective. In this article, we show how trace learning could solve the problem of in-depth rotation-invariant object recognition by developing representations of the transforms that features undergo when they are on the surfaces of 3D objects. Moreover, we show that having learned how features on 3D objects transform geometrically as the object is rotated in depth, the network can correctly recognize novel 3D variations within a generic view of an object composed of a new combination of previously learned features. These results are demonstrated in simulations of a hierarchical network model (VisNet) of the visual system that show that it can develop representations useful for the recognition of 3D objects by forming perspective-invariant representations to allow generalization within a generic view.
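The trace rule at the heart of this account can be sketched as follows. This is a minimal illustration assuming a single linear neuron and illustrative parameters; VisNet itself is a multilayer competitive network.

```python
# Toy sketch of an associative learning rule with a short-term temporal
# memory trace: the trace y_bar is a decaying average of postsynaptic
# activity, and weights follow dw = alpha * y_bar * x, which associates
# temporally adjacent views of the same object onto the same neuron.
import numpy as np

def trace_learning(views, w, alpha=0.05, eta=0.8):
    """Update weights w over a temporal sequence of input views.

    views : array (T, n_inputs), successive views of one object
    eta   : trace decay; y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1)
    """
    y_bar = 0.0
    for x in views:
        y = float(w @ x)                     # postsynaptic activation
        y_bar = (1 - eta) * y + eta * y_bar  # short-term memory trace
        w = w + alpha * y_bar * x            # Hebbian update using the trace
    return w

rng = np.random.default_rng(1)
views = rng.random((10, 16))   # ten successive "views" of one object
w0 = rng.random(16) * 0.1
w1 = trace_learning(views, w0)
```

Because the trace carries activity forward in time, inputs that occur close together in the sequence strengthen the same weight vector, which is the mechanism proposed for associating different views of an object.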


Proceedings of the Royal Society of London B: Biological Sciences | 2002

A unified model of spatial and episodic memory

Edmund T. Rolls; Simon M. Stringer; Thomas P. Trappenberg

Medial temporal lobe structures including the hippocampus are implicated by separate investigations in both episodic memory and spatial function. We show that a single recurrent attractor network can store both the discrete memories that characterize episodic memory and the continuous representations that characterize physical space. Combining both types of representation in a single network is necessary if the system must store both objects and the places where they are located. We thus show that episodic and spatial theories of medial temporal lobe function can be combined in a unified model.
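The core claim, that one recurrent attractor can hold both discrete and continuous representations, can be illustrated with a toy Hopfield-style network. This is an illustrative sketch, not the paper's network: the sizes, widths, and the thresholding of the spatial profiles to binary patterns are all assumptions.

```python
# Toy sketch: one recurrent weight matrix stores random discrete "episodic"
# patterns alongside overlapping "spatial" patterns on a ring, and a degraded
# discrete cue is still recalled by the attractor dynamics.
import numpy as np

rng = np.random.default_rng(2)
N = 200
pos = np.arange(N)

# Discrete episodic memories: random, uncorrelated +/-1 patterns.
discrete = rng.choice([-1.0, 1.0], size=(3, N))

# Continuous spatial representation: overlapping Gaussian profiles around a
# ring, thresholded to +/-1 so both kinds share the same recurrent synapses.
def ring_pattern(center, width=15.0):
    d = np.minimum(np.abs(pos - center), N - np.abs(pos - center))
    return np.where(np.exp(-d**2 / (2 * width**2)) > 0.5, 1.0, -1.0)

spatial = np.array([ring_pattern(c) for c in range(0, N, 20)])

# Hebbian storage of both pattern types in one recurrent weight matrix.
patterns = np.vstack([discrete, spatial])
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Recall a discrete memory from a degraded cue (10% of bits flipped).
cue = discrete[0].copy()
flip = rng.choice(N, N // 10, replace=False)
cue[flip] *= -1.0
s = cue
for _ in range(10):
    s = np.where(W @ s >= 0, 1.0, -1.0)
overlap = float(s @ discrete[0]) / N
```

With the network well under capacity, the dynamics clean up the degraded cue and the overlap with the stored discrete memory returns close to 1, even though the same synapses also encode the correlated spatial patterns.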


Network: Computation In Neural Systems | 2002

Self-organizing continuous attractor networks and path integration: two-dimensional models of place cells

Simon M. Stringer; Edmund T. Rolls; Thomas P. Trappenberg; I.E.T. de Araujo

Single-neuron recording studies have demonstrated the existence of neurons in the hippocampus which appear to encode information about the place where a rat is located, and about the place at which a macaque is looking. We describe ‘continuous attractor’ neural network models of place cells with Gaussian spatial fields in which the recurrent collateral synaptic connections between the neurons reflect the distance between two places. The networks maintain a localized packet of neuronal activity that represents the place where the animal is located. We show for two related models how the representation of the two-dimensional space in the continuous attractor network of place cells could self-organize by modifying the synaptic connections between the neurons, and also how the place being represented can be updated by idiothetic (self-motion) signals in a neural implementation of path integration.
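A one-dimensional toy version of such a continuous attractor shows the key property, a self-sustained localized activity packet. The paper's models are two-dimensional and self-organizing; here the weights are set by hand and all parameters are illustrative.

```python
# Toy sketch of a 1D continuous attractor on a ring: recurrent excitation
# falls off with the distance between the places two cells represent, global
# inhibition keeps activity bounded, and the dynamics sustain a localized
# packet of activity after the external cue is removed.
import numpy as np

N = 100
pos = np.arange(N)
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, N - d)                      # ring (periodic) distance
W = np.exp(-d**2 / (2 * 5.0**2)) - 0.05       # local excitation minus inhibition

# Initialize with a cue at position 50, then run without any external input.
r = np.exp(-(pos - 50.0)**2 / (2 * 3.0**2))
for _ in range(50):
    h = W @ r                                 # recurrent input
    r = np.maximum(h, 0.0)                    # rectification
    r /= r.max()                              # divisive normalization

packet_center = int(np.argmax(r))
```

The packet stays where the cue placed it, which is how such a network can memorize the current place; adding asymmetric, idiothetically gated connections (as in the paper) then shifts the packet to implement path integration.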


Biological Cybernetics | 2002

Invariant recognition of feature combinations in the visual system

Martin C. M. Elliffe; Edmund T. Rolls; Simon M. Stringer

The operation of a hierarchical competitive network model (VisNet) of invariance learning in the visual system is investigated to determine how this class of architecture can solve problems that require the spatial binding of features. First, we show that VisNet neurons can be trained to provide transform-invariant discriminative responses to stimuli which are composed of the same basic alphabet of features, where no single stimulus contains a unique feature not shared by any other stimulus. The investigation shows that the network can discriminate stimuli consisting of sets of features which are subsets or supersets of each other. Second, a key feature-binding issue we address is how invariant representations of low-order combinations of features in the early layers of the visual system are able to uniquely specify the correct spatial arrangement of features in the overall stimulus and ensure correct stimulus identification in the output layer. We show that output layer neurons can learn new stimuli if the lower layers are trained solely through exposure to simpler feature combinations from which the new stimuli are composed. Moreover, we show that after training on the low-order feature combinations which are common to many objects, and after training with a whole stimulus in some locations, this architecture can generalise correctly to the same stimulus when it is shown in a new location. We conclude that this type of hierarchical model can solve feature-binding problems to produce correct invariant identification of whole stimuli.


Journal of Physiology-paris | 2006

Invariant visual object recognition: a model, with lighting invariance

Edmund T. Rolls; Simon M. Stringer

How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size, and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. It has been further extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks, and to account for how the visual system can select single objects in complex visual scenes and how multiple objects can be represented in a scene.


Neural Networks | 2000

Position invariant recognition in the visual system with cluttered environments

Simon M. Stringer; Edmund T. Rolls

We investigate the effects of cluttered environments on the performance of a hierarchical multilayer model of invariant object recognition in the visual system (VisNet) that employs learning rules utilising a trace of previous neural activity. This class of model relies on the spatio-temporal statistics of natural visual inputs to associate together different exemplars of the same stimulus or object, which will tend to occur in temporal proximity. In this paper the different exemplars of a stimulus are the same stimulus in different positions. First, we show that if the stimuli have been learned previously against a plain background, they can be correctly recognised even in environments with cluttered (e.g. natural) backgrounds which form complex scenes. Second, we show that the functional architecture has difficulty in learning new objects if they are presented against cluttered backgrounds. We suggest that processes such as the use of a high-resolution fovea, or attention, may be particularly useful in suppressing the effects of background noise and in segmenting objects from their background when new objects need to be learned. Third, we show that this problem may be ameliorated by the prior existence of stimulus-tuned feature-detecting neurons in the early layers of VisNet, and that these neurons may be set up through previous exposure to the relevant class of objects. Fourth, we extend these results to partially occluded objects, showing that (in contrast with many artificial vision systems) correct recognition in this class of architecture can occur if the objects have been learned previously without occlusion.


Network: Computation In Neural Systems | 2001

Invariant object recognition in the visual system with error correction and temporal difference learning

Edmund T. Rolls; Simon M. Stringer

It has been proposed that invariant pattern recognition might be implemented using a learning rule that utilizes a trace of previous neural activity which, given the spatio-temporal continuity of the statistics of sensory input, is likely to be about the same object, though with differing transforms, over a short time scale. Recently, it has been demonstrated that a modified Hebbian rule which incorporates a trace of previous activity but no contribution from the current activity can offer substantially improved performance. In this paper we show how this rule can be related to error correction rules, and explore a number of error correction rules that can be applied to this problem and produce good invariant pattern recognition. An explicit relationship to temporal difference learning is then demonstrated, and from this further learning rules related to temporal difference learning are developed. This relationship to temporal difference learning allows us to begin to exploit established analyses of temporal difference learning to provide a theoretical framework for better understanding the operation and convergence properties of these learning rules and, more generally, of rules useful for learning invariant representations. The efficacy of these different rules for invariant object recognition is compared using VisNet, a hierarchical competitive network model of the operation of the visual system.
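The general shapes of the rules under discussion can be sketched schematically. These forms, in particular the TD-style error term, are illustrative assumptions about the family of rules, not the paper's exact equations.

```python
# Schematic sketch of the rule family: the standard trace rule, the modified
# rule using only the trace of *previous* activity, and an error-correction
# form in which the current activity corrects the trace's prediction
# (temporal-difference style). Parameters are illustrative.
import numpy as np

def trace(y_bar_prev, y, eta=0.8):
    """Short-term memory trace: y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1)."""
    return (1 - eta) * y + eta * y_bar_prev

def dw_trace(x, y_bar):
    """Standard trace rule: dw proportional to y_bar(t) * x(t)."""
    return y_bar * x

def dw_previous_trace(x, y_bar_prev):
    """Modified rule: only the trace of previous activity, no current y."""
    return y_bar_prev * x

def dw_td_like(x, y, y_bar_prev, gamma=0.9):
    """Illustrative error-correction form: presynaptic input times a
    TD-style error between current activity and the previous trace."""
    return (gamma * y - y_bar_prev) * x

# Compare the three updates over a short input sequence to one linear neuron.
rng = np.random.default_rng(4)
xs = rng.random((5, 8))           # five successive input vectors
w = rng.random(8) * 0.1
y_bar = 0.0
updates = []
for x in xs:
    y = float(w @ x)
    updates.append((dw_trace(x, trace(y_bar, y)),
                    dw_previous_trace(x, y_bar),
                    dw_td_like(x, y, y_bar)))
    y_bar = trace(y_bar, y)
```

Note that the previous-trace rule produces a zero update on the very first presentation, since no trace has yet accumulated; the error-correction form instead compares each new activation against the trace's running prediction.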


Progress in Neurobiology | 2000

On the design of neural networks in the brain by genetic evolution

Edmund T. Rolls; Simon M. Stringer

Hypotheses are presented of what could be specified by genes to enable the different functional architectures of the neural networks found in the brain to be built during ontogenesis. It is suggested that for each class of neuron (e.g., hippocampal CA3 pyramidal cells) a small number of genes specify the generic properties of that neuron class (e.g., the number of neurons in the class, and the firing threshold), while a larger number of genes specify the properties of the synapses onto that class of neuron from each of the other classes that make synapses with it. These properties include not only which other neuron classes the synapses come from, but whether they are excitatory or inhibitory, the nature of the learning rule implemented at the synapse, and the initial strength of such synapses. In a demonstration of the feasibility of the hypotheses to specify the architecture of different types of neuronal network, a genetic algorithm is used to allow the evolution of genotypes which are capable of specifying neural networks that can learn to solve particular computational tasks, including pattern association, autoassociation, and competitive learning. This overall approach allows such hypotheses to be further tested, improved, and extended with the help of neuronal network simulations with genetically specified architectures, in order to develop further our understanding of how the architecture and operation of different parts of the brain are specified by genes, and how different parts of our brains have evolved to perform particular functions.
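The loop in which a genotype specifies generic network parameters, the specified network then learns a task, and evolution selects genotypes by learned performance can be sketched with a toy genetic algorithm. This is an illustration of the idea only; the task, the two-parameter genotype, and the GA operators are all simplified assumptions.

```python
# Toy sketch: evolve genotypes (learning rate, firing threshold) for a
# Hebbian pattern associator; fitness is the network's recall accuracy
# *after learning*, so evolution selects architectures, not weights.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, n_pairs = 20, 10, 5
X = rng.choice([0.0, 1.0], size=(n_pairs, n_in), p=[0.7, 0.3])
Y = rng.choice([0.0, 1.0], size=(n_pairs, n_out), p=[0.7, 0.3])

def fitness(genotype):
    """Build the network the genotype specifies, let it learn the
    pattern-association task with a Hebbian rule, and score its recall."""
    lr, thresh = genotype
    W = np.zeros((n_out, n_in))
    for x, y in zip(X, Y):
        W += lr * np.outer(y, x)                 # Hebbian pattern association
    recalled = (W @ X.T >= thresh).T.astype(float)
    return -np.abs(recalled - Y).sum()           # fewer recall errors = fitter

# Elitist selection plus Gaussian mutation over the two-gene genotype.
pop = [np.array([rng.uniform(0.01, 1.0), rng.uniform(0.1, 5.0)]) for _ in range(20)]
init_best = max(fitness(g) for g in pop)
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # keep the fittest genotypes
    children = [p + rng.normal(0.0, 0.1, size=2) for p in parents]
    pop = parents + children

best = max(pop, key=fitness)
```

Because parents are carried over unchanged each generation, the best fitness never decreases; evolution here tunes only the generic parameters, while the actual associations are learned by the synaptic rule, mirroring the division of labour the paper proposes between genes and learning.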


Network: Computation In Neural Systems | 2001

A model of the interaction between mood and memory

Edmund T. Rolls; Simon M. Stringer

This paper investigates a neural network model of the interaction between mood and memory. The model has two attractor networks that represent the inferior temporal cortex (IT), which stores representations of visual stimuli, and the amygdala, the activity of which reflects the mood state. The two attractor networks are coupled by forward and backward projections. The model is however generic, and is relevant to understanding the interaction between different pairs of modules in the brain, particularly, as is the case with moods and memories, when there are fewer states represented in one module than in the other. During learning, a large number of patterns are presented to the IT, each paired with one of two mood states represented in the amygdala. The recurrent connections within each module, the forward connections from the memory module to the amygdala, and the backward connections from the amygdala to the memory module, are associatively modified. It is shown how the mood state in the amygdala can influence which memory patterns are recalled in the memory module. Further, it is shown that if there is an existing mood state in the amygdala, it can be difficult to change it even when a retrieval cue is presented to the memory module that is associated with a different mood state. It is also shown that the backprojections from the amygdala to the memory module must be relatively weak if memory retrieval in the memory module is not to be disrupted. The results are relevant to understanding the interaction between structures important in mood and emotion (such as the amygdala and orbitofrontal cortex) and other brain areas involved in storing objects and faces (such as the inferior temporal visual cortex) and memories (such as the hippocampus).
