Publications


Featured research published by Elisa Magosso.


Frontiers in Integrative Neuroscience | 2010

An Emergent Model of Multisensory Integration in Superior Colliculus Neurons

Cristiano Cuppini; Mauro Ursino; Elisa Magosso; Benjamin A. Rowland; Barry E. Stein

Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs, nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
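The enhancement and NMDA-blockade effects this model reproduces are conventionally quantified against the best unisensory response, using Meredith and Stein's multisensory enhancement index. A minimal sketch of that computation in Python; the firing-rate values below are illustrative, not taken from the paper:

```python
def enhancement_index(cross_modal_rate, unisensory_rates):
    """Percent multisensory enhancement relative to the best unisensory
    response: ME = 100 * (CM - SMmax) / SMmax (Meredith & Stein's index)."""
    sm_max = max(unisensory_rates)
    return 100.0 * (cross_modal_rate - sm_max) / sm_max

# Illustrative firing rates (spikes/s). Weak stimuli typically yield a large
# proportional enhancement (inverse effectiveness)...
print(enhancement_index(9.0, [3.0, 2.0]))     # 200.0
# ...while strong stimuli yield a much smaller one.
print(enhancement_index(42.0, [35.0, 30.0]))  # 20.0
```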


IEEE Transactions on Neural Networks | 2009

Recognition of Abstract Objects Via Neural Oscillators: Interaction Among Topological Organization, Associative Memory and Gamma Band Synchronization

Mauro Ursino; Elisa Magosso; Cristiano Cuppini

Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience of previous objects is stored in long-range synapses connecting the different topological maps, trained via timing-dependent Hebbian learning (previous-knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation when all features are oscillating in synchrony. Simulations performed by giving the network from one to four simultaneous objects, some with missing and/or modified properties, suggest that the network can reconstruct objects and segment them from the other simultaneously present objects, even in the case of degraded information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization in the recognition of abstract objects.
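For readers unfamiliar with the oscillator units, a minimal sketch of a single Wilson-Cowan excitatory-inhibitory pair follows; the parameters are a classic oscillatory regime for these equations, not the values used in the paper:

```python
import numpy as np

def S(x, a, theta):
    """Wilson-Cowan sigmoid, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

# Classic limit-cycle parameter set for one E-I pair (Wilson & Cowan, 1972)
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
P = 1.25                       # constant external drive to the E population
dt, tau = 0.05, 1.0

E, I, trace = 0.1, 0.05, []
for _ in range(4000):          # forward-Euler integration
    dE = (-E + (1.0 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1.0 - I) * S(c3 * E - c4 * I, a_i, th_i)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)            # E(t) settles onto sustained oscillations
```

In the paper's network, many such units are coupled through lateral and long-range synapses, and synchrony of their oscillations signals that a set of features belongs to one object.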


Journal of Computational Neuroscience | 2009

Multisensory integration in the superior colliculus: a neural network model

Mauro Ursino; Cristiano Cuppini; Elisa Magosso; Andrea Serino; Giuseppe di Pellegrino

Neurons in the superior colliculus (SC) are known to integrate stimuli of different modalities (e.g., visual and auditory) according to specific properties. In this work, we present a mathematical model of the integrative response of SC neurons, in order to suggest a possible physiological mechanism underlying multisensory integration in the SC. The model includes three distinct neural areas: two unimodal areas (auditory and visual) are devoted to a topological representation of external stimuli, and communicate via synaptic connections with a third downstream area (in the SC) responsible for multisensory integration. The simulations show that the model, with a single set of parameters, can mimic various responses to different combinations of external stimuli: inverse effectiveness, in terms of both multisensory enhancement and contrast; within- and cross-modality suppression between spatially disparate stimuli; and a reduction of network settling time in response to cross-modal stimuli compared with individual stimuli. The model suggests that nonlinearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain several aspects of multisensory integration.
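A toy sketch of this three-area architecture, with Gaussian-tuned unimodal inputs converging on a sigmoidal SC layer; all dimensions and parameter values are assumptions for illustration, not the paper's:

```python
import numpy as np

N = 100                                  # neurons per topologically organized area
pos = np.arange(N)

def unimodal_activity(center, amplitude, sigma=4.0):
    """Gaussian population response in a unimodal (visual or auditory) area."""
    return amplitude * np.exp(-(pos - center) ** 2 / (2.0 * sigma ** 2))

def sc_response(vis_amp, aud_amp, center=50, slope=2.0, theta=1.2):
    """Peak activity in the downstream SC area: a sigmoid of the summed,
    topologically aligned unimodal inputs (the source of nonlinearity)."""
    u = unimodal_activity(center, vis_amp) + unimodal_activity(center, aud_amp)
    return (1.0 / (1.0 + np.exp(-slope * (u - theta)))).max()

v, a, va = sc_response(0.8, 0.0), sc_response(0.0, 0.8), sc_response(0.8, 0.8)
# For weak inputs near the sigmoid threshold, the cross-modal response exceeds
# the sum of the unisensory ones (superadditivity); for strong inputs the
# sigmoid saturates and the proportional gain shrinks (inverse effectiveness).
print(v, a, va)
```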


Biological Cybernetics | 2012

Hebbian mechanisms help explain development of multisensory integration in the superior colliculus: a neural network model

Cristiano Cuppini; Elisa Magosso; Benjamin A. Rowland; Barry E. Stein; Mauro Ursino

The superior colliculus (SC) integrates relevant sensory information (visual, auditory, somatosensory) from several cortical and subcortical structures to program orientation responses to external events. However, this capacity is not present at birth and is acquired only through interactions with cross-modal events during maturation. Mathematical models provide a quantitative framework, valuable in helping to clarify the specific neural mechanisms underlying the maturation of multisensory integration in the SC. We extended a neural network model of the adult SC (Cuppini et al., Front Integr Neurosci 4:1–15, 2010) to describe the development of this phenomenon from an immature state, based on known or suspected anatomy and physiology, in which (1) AES afferents are present but weak, (2) responses are driven by non-AES afferents, and (3) the visual inputs have only marginal spatial tuning. Sensory experience was modeled by repeatedly presenting modality-specific and cross-modal stimuli. Synapses in the network were modified by simple Hebbian learning rules. As a consequence of this exposure, (1) receptive fields shrank and came into spatial register, and (2) SC neurons gained the characteristic integrative properties of the adult: enhancement, depression, and inverse effectiveness. Importantly, the unique architecture of the model guided development so that integration became dependent on the relationship between the cortical input and the SC. Manipulating the statistics of experience during development changed the integrative profiles of the neurons, and the results matched those of physiological studies well.
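A minimal sketch of the kind of developmental scheme this abstract describes: initially weak cortical-SC synapses, responses bootstrapped by broadly tuned non-AES input, and a Hebbian rule that potentiates co-active pre/post pairs and depresses mismatched ones. Every rate, tuning width, and learning constant here is an illustrative assumption, not the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                            # neurons per map
x = np.arange(n)
W = rng.uniform(0.0, 0.1, (n, n))                 # weak initial cortical-SC synapses

def hebbian_step(W, pre, post, lr=0.05, w_max=1.0):
    """Potentiate synapses with co-active pre and post; depress synapses
    whose postsynaptic neuron fires while the presynaptic one is silent."""
    dW = lr * np.outer(post, pre) - 0.5 * lr * np.outer(post, 1.0 - pre)
    return np.clip(W + dW, 0.0, w_max)

# "Sensory experience": cross-modal stimuli repeatedly presented at random places
for _ in range(500):
    c = rng.integers(n)
    cortical = np.exp(-(x - c) ** 2 / 8.0)        # sharply tuned AES-like input
    noncortical = np.exp(-(x - c) ** 2 / 60.0)    # broad input driving early responses
    post = ((W @ cortical + noncortical) > 0.8).astype(float)
    W = hebbian_step(W, cortical, post)
# After training, W concentrates near the diagonal: receptive fields have
# shrunk and come into spatial register with the cortical input.
```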


PLOS ONE | 2012

A Neural Network Model of Ventriloquism Effect and Aftereffect

Elisa Magosso; Cristiano Cuppini; Mauro Ursino

Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input: the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability, but the underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus, and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers can explain the ventriloquism effect and aftereffect, even without any convergent multimodal area. The proposed study may advance understanding of the neural architecture and mechanisms underlying visual-auditory integration in the spatial realm.
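A compact sketch of the readout side of this account: Gaussian population codes in the two layers, visual activity boosting the auditory layer only where residual auditory activity overlaps it, and the perceived sound position taken as the barycenter of auditory activity. Layer size, tuning widths, and the coupling gain are illustrative assumptions:

```python
import numpy as np

N = 180                                   # one neuron per degree of azimuth
deg = np.arange(N, dtype=float)

def pop_code(center, sigma):
    return np.exp(-(deg - center) ** 2 / (2.0 * sigma ** 2))

sigma_v, sigma_a = 2.0, 15.0              # vision spatially sharper than audition
k_va = 0.8                                # gain of visual->auditory feedback

def perceived_sound_position(vis_pos, aud_pos):
    a = pop_code(aud_pos, sigma_a)
    v = pop_code(vis_pos, sigma_v)
    a_fb = a + k_va * v * a               # feedback acts only where residual
                                          # auditory activity meets visual activity
    return float((deg * a_fb).sum() / a_fb.sum())   # barycenter readout

# Sound at 90 deg with a visual stimulus at 100 deg: the estimate shifts
# toward the visual side (the ventriloquism effect), and the shift vanishes
# as the disparity grows beyond the auditory tuning width.
print(perceived_sound_position(100.0, 90.0))
```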


Experimental Brain Research | 2011

A Computational Study of Multisensory Maturation in the Superior Colliculus (SC)

Cristiano Cuppini; Barry E. Stein; Benjamin A. Rowland; Elisa Magosso; Mauro Ursino

Multisensory neurons in the cat SC exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields (RFs) and cannot integrate information across sensory modalities. During the first several months of postnatal life, RFs contract, responses become more robust, and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we developed (Cuppini et al. in Front Integr Neurosci 22: 4–6, 2010), using a limited set of biologically realistic assumptions, to describe how this maturational process might take place. The model assumes that during early life, cortical-SC synapses are present but not active, and that responses are driven by non-cortical inputs with very large RFs. Sensory experience is modeled by a “training phase” in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. Cortical-SC synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that RFs are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli. Supported by NIH grants NS036916 and EY016716.
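Given a trained feedforward weight matrix such as the one produced by the Hebbian sketch above, the RF contraction described here can be quantified with a simple width measure (a hypothetical helper, not the paper's metric):

```python
import numpy as np

def rf_width(W, neuron, frac=0.5):
    """Receptive-field width of one SC neuron: the number of input positions
    whose synaptic weight exceeds `frac` of that neuron's peak weight."""
    w = W[neuron]
    return int((w > frac * w.max()).sum())

# Applied before and after the "training phase", this measure drops as the
# cortical-SC weight matrix concentrates around the diagonal.
```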


Cognitive Neurodynamics | 2011

An integrated neural model of semantic memory, lexical retrieval and category formation, based on a distributed feature representation

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

This work presents a connectionist model of the semantic-lexical system. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores, and are then linked together on the basis of previous experience, using physiological learning mechanisms. Particular characteristics of the model are: (1) the semantic aspects of an object are described by a collection of features, whose number may vary between objects; (2) individual features are topologically organized to implement a similarity principle; (3) gamma-band synchronization is used to segment different objects presented simultaneously; (4) the model is able to simulate the formation of categories, assuming that objects belong to the same category if they share some features; (5) homosynaptic potentiation and homosynaptic depression are used within the semantic network to create an asymmetric pattern of synapses, which allows different roles to be assigned to shared and distinctive features during object reconstruction; (6) features that frequently occur together, and the corresponding word-forms, become linked via reciprocal excitatory synapses; (7) features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that, after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions that occurred during learning, in accordance with a grounded-cognition viewpoint. Several words and their conceptual descriptions can coexist in the lexical-semantic system by exploiting gamma-band time division. Schematic examples are shown to illustrate the possibility of distinguishing between words representing a category and words representing individual members, and to evaluate the role of gamma-band synchronization in priming. Finally, the model is used to simulate patients with focal lesions, assuming damage to synaptic strength in specific feature areas. The results are critically discussed in view of future model extensions and application to real objects. The model represents an original effort to incorporate many basic ideas found in recent conceptual theories within a single quantitative scaffold.
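Point (5), the asymmetry between shared and distinctive features, can be illustrated in a few lines: with homosynaptic potentiation plus homosynaptic depression (active presynaptic unit, silent postsynaptic unit), synapses from a shared feature to each object's distinctive features stay weak, while the reverse synapses grow strong. The feature vectors and learning rate below are invented for the illustration:

```python
import numpy as np

# Units 0-1: distinctive of object A; units 3-4: distinctive of B; unit 2: shared.
objects = [np.array([1., 1., 1., 0., 0.]),   # object A
           np.array([0., 0., 1., 1., 1.])]   # object B

W = np.zeros((5, 5))                          # W[post, pre]
lr = 0.02
for _ in range(100):
    for f in objects:
        W += lr * np.outer(f, f)              # homosynaptic potentiation
        W -= lr * np.outer(1.0 - f, f)        # homosynaptic depression:
                                              # active pre, silent post
        np.clip(W, 0.0, 1.0, out=W)
np.fill_diagonal(W, 0.0)

# Shared -> distinctive stays near 0; distinctive -> shared saturates near 1,
# so a distinctive feature can evoke the shared one, but not vice versa:
print(W[0, 2], W[2, 0])   # ~0.0  ~1.0
```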


Frontiers in Psychology | 2010

A computational model of the lexical-semantic system based on a grounded cognition approach.

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

This work presents a connectionist model of the semantic-lexical system based on grounded cognition. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores. The semantic properties of objects are represented as a collection of features, whose number may vary among objects. Features are described as activations of neural oscillators in different sensory-motor areas (one area for each feature), topographically organized to implement a similarity principle. Lexical items are represented as activations of neural groups in a different layer. Lexical and semantic aspects are then linked together on the basis of previous experience, using physiological learning mechanisms. After training, features that frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. The model also includes some inhibitory synapses: features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that, after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Moreover, different objects and the corresponding words can be simultaneously retrieved and segmented via time division in the gamma band. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions that occur during learning. The model simulates the formation of categories, assuming that objects belong to the same category if they share some features. Simple examples are shown to illustrate how words representing a category can be distinguished from words representing individual members. Finally, the model can be used to simulate patients with focal lesions, assuming an impairment of synaptic strength in specific feature areas.
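The time-division claim can be made concrete with a phase-oscillator caricature: features of the same object are coupled excitatorily and lock in phase, while a weak desynchronizing coupling pushes different objects apart, so each object occupies its own slot of the gamma cycle. This is a Kuramoto-style sketch under invented couplings, not the paper's oscillator model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                   # oscillators 0-3: object A; 4-7: object B
K = np.zeros((n, n))
K[:4, :4] = K[4:, 4:] = 1.0             # learned within-object coupling
K[:4, 4:] = K[4:, :4] = -0.2            # weak between-object desynchronization
np.fill_diagonal(K, 0.0)

theta = rng.uniform(0.0, 2.0 * np.pi, n)    # oscillator phases
omega = 2.0 * np.pi * 40.0                  # common ~40 Hz gamma rhythm
dt = 1e-4
for _ in range(20000):                      # Kuramoto phase dynamics
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = (theta + dt * (omega + coupling)) % (2.0 * np.pi)

# Features 0-3 end up phase-locked with each other and out of phase with 4-7:
# the two objects occupy separate portions of each gamma cycle.
print(np.round(theta, 2))
```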


Frontiers in Psychology | 2011

Organization, Maturation, and Plasticity of Multisensory Integration: Insights from Computational Modeling Studies

Cristiano Cuppini; Elisa Magosso; Mauro Ursino

In this paper, we present two neural network models, devoted to two specific and widely investigated aspects of multisensory integration, in order to demonstrate the potential of computational models to provide insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure, the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and to describe how the SC's integrative capability, which is not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extent of peripersonal space, where multimodal integration occurs, may be modified by experience such as the use of a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, nonlinear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern the fusion of the senses in the brain more generally, as well as the learning and plasticity of multisensory integration; (ii) the models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that help guide future experiments to validate, reject, or modify the main assumptions.
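Of the shared mechanisms listed under (i), lateral excitation and inhibition are commonly implemented as a "Mexican hat" kernel: short-range excitation with a longer-range inhibitory surround. A sketch under assumed widths and gains, showing how recurrent settling with such a kernel sharpens a noisy activity bump:

```python
import numpy as np

def mexican_hat(n, sig_ex=2.0, sig_in=8.0, g_ex=1.0, g_in=0.5):
    """Difference-of-Gaussians lateral kernel: nearby neurons excite each
    other, more distant neurons inhibit each other."""
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    L = g_ex * np.exp(-d**2 / (2 * sig_ex**2)) - g_in * np.exp(-d**2 / (2 * sig_in**2))
    np.fill_diagonal(L, 0.0)              # no self-connection
    return L

n = 100
L = mexican_hat(n)
x = np.exp(-(np.arange(n) - 50.0) ** 2 / 50.0)          # input bump...
x += 0.1 * np.random.default_rng(3).normal(size=n)      # ...plus noise
for _ in range(30):                                      # recurrent settling
    x = np.clip(x + 0.05 * (L @ x), 0.0, 1.0)
# x is now a cleaner, sharper bump: the inhibitory surround suppresses noise
# and implements competition, while local excitation sustains the peak.
```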


BioSystems | 2009

A neural network model of semantic memory linking feature-based object representation and words.

Cristiano Cuppini; Elisa Magosso; Mauro Ursino

Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process involving many cortical areas and based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features belonging to different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. Looking ahead, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
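The association step, linking feature patterns to lexical units with a Hebbian rule and then retrieving a word from incomplete sensory-motor input, reduces to an outer-product store in the simplest case. The vocabulary and feature assignments below are invented for illustration; the oscillator dynamics and segmentation of the full model are omitted:

```python
import numpy as np

# Toy semantic store: each object is a binary feature vector (columns are
# hypothetical sensory-motor features), each word a lexical unit.
features = np.array([[1, 1, 0, 1, 0, 0],     # object/word 0
                     [1, 0, 1, 0, 1, 0],     # object/word 1
                     [0, 0, 0, 1, 0, 1]],    # object/word 2
                    dtype=float)
words = ["word-0", "word-1", "word-2"]

# Hebbian feature->lexical weights: the outer-product sum of one-hot word
# activity with the co-presented feature pattern equals `features` itself.
W_fl = features.copy()

def retrieve_word(partial_pattern):
    """Co-activation of lexical units by an (incomplete) feature pattern."""
    return words[int(np.argmax(W_fl @ partial_pattern))]

# An incomplete cue (two of word-0's three features) still retrieves word-0.
print(retrieve_word(np.array([1., 1., 0., 0., 0., 0.])))
```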

Collaboration


Dive into Elisa Magosso's collaborations.

Top Co-Authors

Andrea Serino

École Polytechnique Fédérale de Lausanne
