Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cristiano Cuppini is active.

Publication


Featured research published by Cristiano Cuppini.


Neural Networks | 2014

Neurocomputational approaches to modelling multisensory integration in the brain

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

The brain's ability to integrate information from different modalities (multisensory integration) is fundamental for accurate sensory experience and efficient interaction with the environment: it enhances detection of external stimuli, disambiguates conflict situations, speeds up responsiveness, and facilitates memory retrieval and object recognition. Multisensory integration operates at several brain levels: in subcortical structures (especially the Superior Colliculus), in higher-level associative cortices (e.g., posterior parietal regions), and even in early cortical areas (such as primary cortices) traditionally considered to be purely unisensory. Because brain integrative phenomena involve complex non-linear mechanisms, neurocomputational models are a key tool for understanding them. This review examines different modelling principles and architectures, distinguishing the models on the basis of their aims: (i) Bayesian models based on probabilities and realizing optimal estimators of external cues; (ii) biologically inspired models of multisensory integration in the Superior Colliculus and in the cortex, both at the level of single neurons and of networks of neurons, with emphasis on physiological mechanisms and architectural schemes; among the latter, some models exhibit synaptic plasticity and reproduce the development of integrative capabilities via Hebbian learning rules or self-organizing maps; (iii) models of semantic memory that implement object meaning as a fusion of sensory-motor features (embodied cognition). This overview paves the way to future challenges, such as reconciling neurophysiological and Bayesian models into a unifying theory, and stimulates upcoming research in both theoretical and applied domains.
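
As a rough, hedged illustration of class (i) above, the short Python sketch below shows the standard reliability-weighted (inverse-variance) fusion of two Gaussian cues that such optimal estimators realize; the function name and all numerical values are illustrative assumptions, not material from the review.

```python
import numpy as np

def fuse_cues(x_v, var_v, x_a, var_a):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian cues.

    Returns the fused position estimate and its variance, assuming a flat
    prior and independent Gaussian noise on each cue.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)  # weight of the visual cue
    w_a = 1.0 - w_v                                    # weight of the auditory cue
    x_hat = w_v * x_v + w_a * x_a                      # fused estimate
    var_hat = 1.0 / (1.0 / var_v + 1.0 / var_a)        # fused variance (never larger than either cue)
    return x_hat, var_hat

# Illustrative values: vision is more precise, so the fused estimate is pulled toward it.
print(fuse_cues(x_v=0.0, var_v=1.0, x_a=5.0, var_a=4.0))   # -> (1.0, 0.8)
```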


NeuroImage | 2014

A Neurocomputational Analysis of the Sound-Induced Flash Illusion

Cristiano Cuppini; Elisa Magosso; Nadia Bolognini; Giuseppe Vallar; Mauro Ursino

Perception of the external world is based on the integration of inputs from different sensory modalities. Recent experimental findings suggest that this phenomenon is present in lower-level cortical areas at early processing stages. The mechanisms underlying these early processes and the organization of the underlying circuitries are still a matter of debate. Here, we investigate audiovisual interactions by means of a simple neural network consisting of two layers of visual and auditory neurons. We suggest that the spatial and temporal aspects of audio-visual illusions can be explained within this simple framework, based on two main assumptions: auditory and visual neurons communicate via excitatory synapses; and spatio-temporal receptive fields differ in the two modalities, auditory processing exhibiting higher temporal resolution and visual processing higher spatial acuity. With these assumptions, the model is able: i) to simulate the sound-induced flash fission illusion; ii) to reproduce psychometric curves assuming a random variability in some parameters; iii) to account for other audio-visual illusions, such as the sound-induced flash fusion and the ventriloquism illusions; and iv) to predict that visual and auditory stimuli are combined optimally in multisensory integration. In sum, the proposed model provides a unifying summary of spatio-temporal audio-visual interactions, able both to account for a wide set of empirical findings and to serve as a framework for future experiments. In perspective, it may be used to understand the neural basis of Bayesian audio-visual inference.
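
The following minimal Python sketch, assuming illustrative parameters rather than those of the paper, captures the two assumptions stated above: two spatially tuned layers coupled by excitatory cross-modal synapses, with the visual layer given sharper spatial tuning and slower temporal dynamics than the auditory layer.

```python
import numpy as np

# A minimal sketch of the two-layer idea (not the authors' code): a visual and
# an auditory layer of spatially tuned units, coupled by excitatory cross-modal
# synapses. All parameter values are illustrative assumptions.

n = 40                                  # units per layer (spatial positions)
pos = np.arange(n)

sigma_space = {"vis": 1.0, "aud": 4.0}  # visual units: sharper spatial tuning
tau = {"vis": 40e-3, "aud": 10e-3}      # auditory units: faster temporal dynamics (s)
w_cross = 0.3                           # strength of excitatory cross-modal synapses

def rf_input(center, sigma):
    """Gaussian spatial receptive-field response of each unit to a point stimulus."""
    return np.exp(-0.5 * ((pos - center) / sigma) ** 2)

def simulate(stim_vis, stim_aud, dt=1e-3, steps=200):
    """Leaky integration of each layer's activity with cross-modal excitation."""
    v = np.zeros(n)
    a = np.zeros(n)
    for _ in range(steps):
        v += dt / tau["vis"] * (-v + stim_vis + w_cross * a)
        a += dt / tau["aud"] * (-a + stim_aud + w_cross * v)
    return v, a

# Spatially coincident flash and beep at position 20: the auditory input
# boosts the visual response through the cross-modal synapses.
v, a = simulate(rf_input(20, sigma_space["vis"]), rf_input(20, sigma_space["aud"]))
print(v.max(), a.max())
```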


Neural Networks | 2015

A neural network for learning the meaning of objects and words from a featural representation

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

The present work investigates how complex semantics can be extracted from the statistics of input features, using an attractor neural network. The study is focused on how feature dominance and feature distinctiveness can be naturally coded using Hebbian training, and how similarity among objects can be managed. The model includes a lexical network (which represents word-forms) and a semantic network composed of several areas: each area is topologically organized (similarity) and codes for a different feature. Synapses in the model are created using Hebb rules with different values for pre-synaptic and post-synaptic thresholds, producing patterns of asymmetrical synapses. This work uses a simple taxonomy of schematic objects (i.e., a vector of features), with shared features (to realize categories) and distinctive features (to have individual members) with different frequency of occurrence. The trained network can solve simple object recognition tasks and object naming tasks by maintaining a distinction between categories and their members, and providing a different role for dominant features vs. marginal features. Marginal features are not evoked in memory when thinking of objects, but they facilitate the reconstruction of objects when provided as input. Finally, the topological organization of features allows the recognition of objects with some modified features.
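
A minimal sketch, assuming illustrative thresholds and learning rates not taken from the paper, of how different pre- and post-synaptic thresholds in a Hebbian rule can yield asymmetric synapses between dominant and marginal features:

```python
import numpy as np

def hebbian_update(W, x, lr=0.05, theta_pre=0.2, theta_post=0.6):
    """One Hebbian step: potentiate when pre- and post-synaptic activity exceed
    their (different) thresholds, depress otherwise (covariance-like rule)."""
    pre = x - theta_pre                  # presynaptic term
    post = x - theta_post                # postsynaptic term
    dW = lr * np.outer(post, pre)        # row = postsynaptic unit, column = presynaptic unit
    W = np.clip(W + dW, 0.0, 1.0)        # keep synapses excitatory and bounded
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(0)
n_features = 8
W = np.zeros((n_features, n_features))
prototype = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)   # shared (dominant) features
for _ in range(200):
    obj = prototype.copy()
    obj[4:] = rng.random(4) < 0.3        # marginal features occur less frequently
    W = hebbian_update(W, obj)

# Asymmetry: synapses from a marginal to a dominant feature vs. the reverse.
print(W[0, 5], W[5, 0])
```

Under these assumed settings, synapses from marginal to dominant features grow strong while the reverse stay weak, in line with the asymmetry described in the abstract: marginal features are not evoked when thinking of an object, but they help reconstruct it when provided as input.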


Bilingualism: Language and Cognition | 2013

Learning the lexical aspects of a second language at different proficiencies: A neural computational study

Cristiano Cuppini; Elisa Magosso; Mauro Ursino

We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a single Lexical Network. Lexical and semantic aspects are then linked together during Hebbian learning phases by presenting L2 lexical items and their L1 translation equivalents. The model hypothesizes the existence of a competitive mechanism to solve conflicts and simulate language switching tasks. Results demonstrate that, at the beginning of training, an L2 lexicon must parasitize its L1 equivalent to access its conceptual meaning. At intermediate proficiency, L2 items may evoke their semantics independently of L1, but with a high risk of interference. At higher proficiency, the L2 representation becomes progressively similar to the L1 representation, according to Green's (2003) convergence hypothesis.


Neural Computation | 2017

Multisensory Bayesian inference depends on synapse maturation during training: Theoretical analysis and neural modeling implementation

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding—the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
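
A minimal sketch of the "Hebbian reinforcement plus decay" idea for a single unisensory neuron is given below; the spatial grid, learning rate, and noise levels are illustrative assumptions, not the letter's parameters.

```python
import numpy as np

# Receptive-field weights are reinforced in proportion to pre * post activity
# and pulled back by a decay term, so at convergence they track the input
# statistics (a Gaussian likelihood over positions). Values are illustrative.

n_pos = 101
pos = np.linspace(-50, 50, n_pos)          # spatial axis (deg)
preferred = 0.0                            # preferred position of this neuron
sigma_input = 5.0                          # sensory noise of the modality

rng = np.random.default_rng(1)
w = np.full(n_pos, 0.2)                    # broad initial receptive field
lr, decay = 0.02, 0.02

for _ in range(2000):
    stim = rng.normal(preferred, sigma_input)          # noisy stimulus position
    inp = np.exp(-0.5 * ((pos - stim) / 2.0) ** 2)     # narrow input activity bump
    post = float(w @ inp)                              # neuron's response (inner product)
    w += lr * post * inp - decay * post * w            # Hebbian reinforcement + decay

# After training the receptive field has shrunk around the preferred position,
# with a width set by the input statistics.
print(pos[np.argmax(w)], (w > 0.5 * w.max()).sum())
```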


Journal of Integrative Neuroscience | 2013

The formation of categories and the representation of feature saliency: analysis with a computational model trained with an Hebbian paradigm.

Mauro Ursino; Cristiano Cuppini; Elisa Magosso

An important issue in semantic memory models is the formation of categories and taxonomies, and the different role played by shared vs. distinctive and salient vs. marginal features. The aim of this work is to extend our previous model to critically discuss the mechanisms leading to the formation of categories, and to investigate how feature saliency can be learned from past experience. The model assumes that an object is represented as a collection of features, which belong to different cortical areas and are topologically organized. Excitatory synapses among features are created on the basis of past experience of object presentation, with a Hebbian paradigm including potentiation and depression of synapses and thresholding of presynaptic and postsynaptic activity. The model was trained using simple schematic objects as input (i.e., vectors of features) having some shared features (so as to realize a simple category) and some distinctive features with different frequencies. Three different taxonomies of objects were separately trained and tested, which differ in the number of correlated features and the structure of categories. Results show that categories can be formed from past experience, using Hebbian rules with different thresholds for postsynaptic and presynaptic activity. Furthermore, features acquire different saliency as a consequence of their different frequencies during training. The trained network is able to solve simple object recognition tasks, by maintaining a distinction between categories and individual members of the category, and providing a different role for salient vs. non-salient features. In particular, non-salient features are not evoked in memory when thinking about the object, but they facilitate the reconstruction of objects when provided as input to the model. The results can provide indications of which neural mechanisms can be exploited to form robust categories among objects and which mechanisms could be implemented in artificial connectionist systems to extract concepts and categories from a continuous stream of input objects (each represented as a vector of features).


Frontiers in Human Neuroscience | 2017

A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum

Cristiano Cuppini; Mauro Ursino; Elisa Magosso; Lars A. Ross; John J. Foxe; Sophie Molholm

Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.


Frontiers in Computational Neuroscience | 2017

Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

Mauro Ursino; Andrea Crisafulli; Giuseppe di Pellegrino; Elisa Magosso; Cristiano Cuppini

The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity.
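
As a hedged reference point for the foveal bias described above, the sketch below computes the posterior mean for a central Gaussian prior combined with a Gaussian visual likelihood and shows the bias toward the center growing with sensory noise; all values are illustrative assumptions, not the paper's parameters.

```python
# Posterior mean of a Gaussian prior (centred at 0 deg, the fovea) times a
# Gaussian likelihood: the estimate is pulled toward the centre, and the pull
# grows with sensory noise. Values are illustrative assumptions.

def posterior_mean(x_obs, sigma_like, sigma_prior, mu_prior=0.0):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    w_like = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w_like * x_obs + (1 - w_like) * mu_prior

for sigma in (2.0, 5.0, 10.0):            # increasing sensory noise
    est = posterior_mean(x_obs=20.0, sigma_like=sigma, sigma_prior=8.0)
    print(f"noise {sigma:>4}: estimate {est:5.1f} deg (bias {20.0 - est:4.1f} deg)")
```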


European Journal of Neuroscience | 2017

A biologically inspired neurocomputational model for audiovisual integration and causal inference

Cristiano Cuppini; Ladan Shams; Elisa Magosso; Mauro Ursino

Recently, experimental and theoretical research has focused on the brain's abilities to extract information from a noisy sensory environment and on how cross-modal inputs are processed to solve the causal inference problem and provide the best estimate of external events. Despite the empirical evidence suggesting that the nervous system uses a statistically optimal and probabilistic approach in addressing these problems, little is known about the brain's architecture needed to implement these computations. The aim of this work was to realize a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three topologically organized layers: two encode auditory and visual stimuli separately, are reciprocally connected via excitatory synapses, and send excitatory connections to the third, downstream layer. This synaptic organization realizes two mechanisms of cross-modal interaction: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the spatial auditory-visual distance, and the dependence of the auditory error on the causal inference. Finally, simulation results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, agreeing with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.
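
For comparison, the short sketch below implements the standard normative Bayesian causal-inference computation that network models like this one are typically evaluated against (in the spirit of Körding et al., 2007, with Gaussian likelihoods and a zero-mean Gaussian spatial prior), together with a probability-matching report; it is not the authors' network, and all numerical values are illustrative assumptions.

```python
import numpy as np

def posterior_common(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability of a common cause for two noisy 1-D cues,
    assuming Gaussian likelihoods and a zero-mean Gaussian spatial prior."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the pair (x_a, x_v) under a single shared source.
    var1 = va * vv + va * vp + vv * vp
    norm1 = 2 * np.pi * np.sqrt(var1)
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / var1) / norm1
    # Likelihood under two independent sources.
    norm2 = 2 * np.pi * np.sqrt((va + vp) * (vv + vp))
    like2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) / norm2
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

rng = np.random.default_rng(2)
p_c = posterior_common(x_a=8.0, x_v=2.0, sigma_a=4.0, sigma_v=1.5, sigma_p=15.0, p_common=0.5)
report_common = rng.random() < p_c      # probability matching rather than maximizing
print(round(p_c, 3), report_common)
```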


Neuropsychologia | 2016

Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction

Elisa Magosso; Caterina Bertini; Cristiano Cuppini; Mauro Ursino

Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, while visual enhancement of auditory localization persists even after complete V1 damage. The present study may contribute to advancing understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory-deficit conditions.

Collaboration


Dive into Cristiano Cuppini's collaborations.

Top Co-Authors

Giuseppe Vallar (University of Milano-Bicocca)
Nadia Bolognini (University of Milano-Bicocca)
John J. Foxe (University of Rochester)
Ladan Shams (University of California)