Paolo Del Giudice
Istituto Superiore di Sanità
Publications
Featured research published by Paolo Del Giudice.
Neural Computation | 2000
Maurizio Mattia; Paolo Del Giudice
A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6% more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated synaptic matrix to the neural activity it induces.
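The core idea of the event-driven scheme can be sketched in a few lines: neuron state is updated only when a spike event arrives, with the inter-event evolution computed analytically. The following toy is not the paper's algorithm or parameters, just a minimal illustration with leaky integrate-and-fire neurons, a priority queue of events, and a single fixed efficacy shared (for brevity) by external and recurrent spikes.

```python
import heapq
import math
import random

# Hypothetical parameters for the sketch (not from the paper).
TAU = 0.020      # membrane time constant (s)
V_THRESH = 1.0   # firing threshold
V_RESET = 0.0    # reset potential
W = 0.12         # synaptic efficacy (same for external and recurrent inputs)

class Neuron:
    def __init__(self):
        self.v = 0.0
        self.last_t = 0.0

    def receive(self, t, w):
        # Deterministic evolution between events: analytic exponential decay,
        # applied lazily only when this neuron is touched by a spike.
        self.v *= math.exp(-(t - self.last_t) / TAU)
        self.last_t = t
        self.v += w
        if self.v >= V_THRESH:
            self.v = V_RESET
            return True   # the neuron emits a spike
        return False

def simulate(n_neurons=10, rate=200.0, t_max=1.0, seed=42):
    rng = random.Random(seed)
    neurons = [Neuron() for _ in range(n_neurons)]
    # Event queue seeded with each neuron's first external Poisson input.
    events = [(rng.expovariate(rate), i, "ext") for i in range(n_neurons)]
    heapq.heapify(events)
    emitted = 0
    while events:
        t, i, kind = heapq.heappop(events)
        if t > t_max:
            break
        if kind == "ext":
            # Schedule this neuron's next external Poisson input.
            heapq.heappush(events, (t + rng.expovariate(rate), i, "ext"))
        if neurons[i].receive(t, W):
            emitted += 1
            # Propagate the spike to all other neurons with a small delay.
            for j in range(n_neurons):
                if j != i:
                    heapq.heappush(events, (t + 1e-3, j, "rec"))
    return emitted

print(simulate())
```

The point of the lazy update in `receive` is that no computation happens for a silent neuron, which is what makes the cost scale with spike traffic rather than with wall-clock time steps.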
PLOS Computational Biology | 2009
Guido Gigante; Maurizio Mattia; Jochen Braun; Paolo Del Giudice
We propose a novel explanation for bistable perception, namely, the collective dynamics of multiple neural populations that are individually meta-stable. Distributed representations of sensory input and of perceptual state build gradually through noise-driven transitions in these populations, until the competition between alternative representations is resolved by a threshold mechanism. The perpetual repetition of this collective race to threshold renders perception bistable. This collective dynamics – which is largely uncoupled from the time-scales that govern individual populations or neurons – explains many hitherto puzzling observations about bistable perception: the wide range of mean alternation rates exhibited by bistable phenomena, the consistent variability of successive dominance periods, and the stabilizing effect of past perceptual states. It also predicts a number of previously unsuspected relationships between observable quantities characterizing bistable perception. We conclude that bistable perception reflects the collective nature of neural decision making rather than properties of individual populations or neurons.
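The "collective race to threshold" can be caricatured in a few lines. The sketch below is not the paper's multi-population model; it is a toy with invented parameters in which two pools accumulate evidence through noise-driven steps, the first to cross threshold wins, and the race restarts, yielding variable dominance periods.

```python
import random

def dominance_periods(n_switches=200, drift=0.01, noise=0.3,
                      threshold=10.0, seed=1):
    # Two competing pools take noisy, weakly drifting steps (reflected at 0);
    # a perceptual switch occurs when either first reaches threshold.
    rng = random.Random(seed)
    periods = []
    for _ in range(n_switches):
        a = b = 0.0
        t = 0
        while a < threshold and b < threshold:
            a = max(0.0, a + drift + rng.gauss(0.0, noise))
            b = max(0.0, b + drift + rng.gauss(0.0, noise))
            t += 1
        periods.append(t)
    return periods

periods = dominance_periods()
mean = sum(periods) / len(periods)
var = sum((p - mean) ** 2 for p in periods) / len(periods)
cv = var ** 0.5 / mean   # variability of successive dominance periods
print(round(mean, 1), round(cv, 2))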
PLOS ONE | 2008
Daniel Martí; Gustavo Deco; Maurizio Mattia; Guido Gigante; Paolo Del Giudice
The spike activity of cells in some cortical areas has been found to be correlated with reaction times and behavioral responses during two-choice decision tasks. These experimental findings have motivated the study of biologically plausible winner-take-all network models, in which strong recurrent excitation and feedback inhibition allow the network to form a categorical choice upon stimulation. Choice formation corresponds in these models to the transition from the spontaneous state of the network to a state where neurons selective for one of the choices fire at a high rate and inhibit the activity of the other neurons. This transition has been traditionally induced by an increase in the external input that destabilizes the spontaneous state of the network and forces its relaxation to a decision state. Here we explore a different mechanism by which the system can undergo such transitions while keeping the spontaneous state stable, based on an escape induced by finite-size noise from the spontaneous state. This decision mechanism naturally arises for low stimulus strengths and leads to exponentially distributed decision times when the amount of noise in the system is small. Furthermore, we show using numerical simulations that mean decision times follow in this regime an exponential dependence on the amplitude of noise. The escape mechanism provides thus a dynamical basis for the wide range and variability of decision times observed experimentally.
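The escape mechanism is easy to illustrate with a one-dimensional surrogate. In this sketch (illustrative parameters, not the network model of the paper), the state relaxes toward a stable fixed point at 0 but is perturbed by noise; a "decision" is the first passage over a barrier, and in the weak-noise regime the resulting times are approximately exponentially distributed, i.e. their coefficient of variation is near 1.

```python
import random

def first_passage(rng, noise=0.5, barrier=1.0, dt=0.01):
    # Euler-Maruyama integration of dx = -x dt + noise dW, started at the
    # stable point x = 0, until x first exceeds the barrier.
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5
    while x < barrier:
        x += -x * dt + rng.gauss(0.0, sd)
        t += dt
    return t

def decision_times(n=300, seed=7):
    rng = random.Random(seed)
    return [first_passage(rng) for _ in range(n)]

times = decision_times()
mean = sum(times) / len(times)
cv = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5 / mean
print(round(mean, 1), round(cv, 2))
```

Raising `noise` shortens the mean escape time steeply (the Kramers-like exponential dependence noted in the abstract), while the shape of the distribution stays close to exponential as long as escapes remain rare.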
The Journal of Neuroscience | 2013
Maurizio Mattia; Pierpaolo Pani; Giovanni Mirabella; Stefania Costa; Paolo Del Giudice; Stefano Ferraina
Cognitive functions like motor planning rely on the concerted activity of multiple neuronal assemblies underlying still elusive computational strategies. During reaching tasks, we observed stereotyped sudden transitions (STs) between low and high multiunit activity of monkey dorsal premotor cortex (PMd) predicting forthcoming actions on a single-trial basis. Occurrence of STs was observed even when movement was delayed or successfully canceled after a stop signal, excluding a mere substrate of the motor execution. An attractor model accounts for upward STs and high-frequency modulations of field potentials, indicative of local synaptic reverberation. We found in vivo compelling evidence that motor plans in PMd emerge from the coactivation of such attractor modules, heterogeneous in the strength of local synaptic self-excitation. Modules with strong coupling early reacted with variable times to weak inputs, priming a chain reaction of both upward and downward STs in other modules. Such web of “flip-flops” rapidly converged to a stereotyped distributed representation of the motor program, as prescribed by the long-standing theory of associative networks.
Frontiers in Neuroscience | 2012
Massimiliano Giulioni; Patrick Camilleri; Maurizio Mattia; Vittorio Dante; Jochen Braun; Paolo Del Giudice
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low”-firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
Neural Computation | 2009
Massimiliano Giulioni; Mario Pannunzi; Davide Badoni; Vittorio Dante; Paolo Del Giudice
We describe the implementation and illustrate the learning performance of an analog VLSI network of 32 integrate-and-fire neurons with spike-frequency adaptation and 2016 Hebbian bistable spike-driven stochastic synapses, endowed with a self-regulating plasticity mechanism, which avoids unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and external connectivity with address-event representation compliant devices. We demonstrate a marked improvement in the efficiency of the network in classifying correlated patterns, owing to the self-regulating mechanism.
Network: Computation In Neural Systems | 1995
Alessandro Campa; Paolo Del Giudice; Néstor Parga; Jean-Pierre Nadal
We consider a linear, one-layer feedforward neural network performing a coding task. The goal of the network is to provide a statistical neural representation that conveys as much information as possible on the input stimuli in noisy conditions. We determine the family of synaptic couplings that maximizes the mutual information between input and output distribution. Optimization is performed under different constraints on the synaptic efficacies. We analyse the dependence of the solutions on input and output noises. This work goes beyond previous studies of the same problem in that: (i) we perform a detailed stability analysis in order to find the global maxima of the mutual information; (ii) we examine the properties of the optimal synaptic configurations under different constraints; (iii) and we do not assume translational invariance of the input data, as it is usually done when inputs are assumed to be visual stimuli.
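For a single linear output unit y = w·x + n with Gaussian input covariance C and output noise variance s2, the mutual information is I(w) = ½ log(1 + wCwᵀ/s2), so under a unit-norm constraint it is maximized by aligning w with the top eigenvector of C. The snippet below is a minimal numerical check of that textbook fact (invented covariance, not the paper's setting):

```python
import math

def quad(C, w):
    # w C w^T for a 2x2 covariance matrix and a 2-vector.
    return (w[0] * (C[0][0] * w[0] + C[0][1] * w[1])
            + w[1] * (C[1][0] * w[0] + C[1][1] * w[1]))

def info(C, w, s2):
    # Mutual information of the linear-Gaussian channel y = w.x + noise.
    return 0.5 * math.log(1.0 + quad(C, w) / s2)

C = [[2.0, 0.5], [0.5, 1.0]]   # assumed input covariance
s2 = 0.1                        # assumed output noise variance

# Top eigenvector of C (closed form for 2x2), normalized to unit length.
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
lam = 0.5 * (tr + math.sqrt(tr * tr - 4 * det))   # largest eigenvalue
v = [C[0][1], lam - C[0][0]]
norm = math.hypot(v[0], v[1])
w_opt = [v[0] / norm, v[1] / norm]

w_axis = [1.0, 0.0]             # any other unit-norm direction
print(info(C, w_opt, s2) > info(C, w_axis, s2))   # optimal direction wins
```

The paper's contribution lies in what this toy omits: the stability analysis that singles out global maxima, the effect of different constraints on the efficacies, and inputs without translational invariance.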
Scientific Reports | 2015
Roni Hogri; Simeon A. Bamford; Aryeh H. Taub; Ari Magal; Paolo Del Giudice; Matti Mintz
Neuroprostheses could potentially recover functions lost due to neural damage. Typical neuroprostheses connect an intact brain with the external environment, thus replacing damaged sensory or motor pathways. Recently, closed-loop neuroprostheses, bidirectionally interfaced with the brain, have begun to emerge, offering an opportunity to substitute malfunctioning brain structures. In this proof-of-concept study, we demonstrate a neuro-inspired model-based approach to neuroprostheses. A VLSI chip was designed to implement essential cerebellar synaptic plasticity rules, and was interfaced with cerebellar input and output nuclei in real time, thus reproducing cerebellum-dependent learning in anesthetized rats. Such a model-based approach does not require prior system identification, allowing for de novo experience-based learning in the brain-chip hybrid, with potential clinical advantages and limitations when compared to existing parametric “black box” models.
NeuroImage | 2010
Maurizio Mattia; Stefano Ferraina; Paolo Del Giudice
Local field potentials (LFP) and multi-unit activity (MUA) recorded in vivo are known to convey different information about the underlying neural activity. Here we extend and support the idea that single-electrode LFP-MUA task-related modulations can shed light on the involved large-scale, multi-modular neural dynamics. We first illustrate a theoretical scheme and associated simulation evidence, proposing that in a multi-modular neural architecture local and distributed dynamic properties can be extracted from the local spiking activity of one pool of neurons in the network. From this new perspective, the spectral features of the field potentials reflect the time structure of the ongoing fluctuations of the probed local neuronal pool on a wide frequency range. We then report results obtained recording from the dorsal premotor (PMd) cortex of monkeys performing a countermanding task, in which a reaching movement is performed, unless a visual stop signal is presented. We find that the LFP and MUA spectral components on a wide frequency band (3-2000 Hz) are very differently modulated in time for successful reaching, successful and wrong stop trials, suggesting an interplay of local and distributed components of the underlying neural activity in different periods of the trials and for different behavioural outcomes. Besides, the MUA spectral power is shown to possess a time-dependent structure, which we suggest could help in understanding the successive involvement of different local neuronal populations. Finally, we compare signals recorded from PMd and dorso-lateral prefrontal (PFCd) cortex in the same experiment, and speculate that the comparative time-dependent spectral analysis of LFP and MUA can help reveal patterns of functional connectivity in the brain.
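The band-wise comparison underlying such analyses can be sketched on synthetic data. This is not the recorded PMd signal or the authors' pipeline, just a naive periodogram separating a slow "LFP-like" rhythm from a weaker fast "MUA-like" component:

```python
import cmath
import math
import random

FS = 1000   # sampling rate in Hz (assumed)
N = 512     # window length in samples

def periodogram(x):
    # Naive DFT power estimate; fine for short windows like this one.
    n = len(x)
    power = []
    for k in range(n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2 / n)
    return power   # power[k] is the power at frequency k * FS / n

rng = random.Random(0)
signal = [math.sin(2 * math.pi * 10 * t / FS)            # slow, LFP-like
          + 0.3 * math.sin(2 * math.pi * 200 * t / FS)   # fast, MUA-like
          + 0.1 * rng.gauss(0, 1)                        # background noise
          for t in range(N)]

p = periodogram(signal)
hz = FS / N   # frequency resolution of one bin
low = sum(p[k] for k in range(len(p)) if k * hz < 30)
high = sum(p[k] for k in range(len(p)) if 100 <= k * hz < 400)
print(low > high)   # the slow rhythm carries more power in this example
```

In the paper the interesting quantity is how such band-limited power is modulated in time across trial types, which requires sliding this estimate across the trial rather than computing it once.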
Scientific Reports | 2015
Massimiliano Giulioni; Federico Corradi; Vittorio Dante; Paolo Del Giudice
Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
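A schematic software analogue of the retrieval behavior (not the chip, and not its plasticity rule) is Hopfield-style associative memory: Hebbian learning stores a binary pattern as a point attractor, and a corrupted cue inside the basin of attraction relaxes back to the stored prototype.

```python
import random

def store(patterns, n):
    # Hopfield-style Hebbian weight matrix (zero diagonal).
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=20):
    # Asynchronous sign-threshold updates until (in practice) a fixed point.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

n = 64
rng = random.Random(3)
pattern = [rng.choice((-1, 1)) for _ in range(n)]
w = store([pattern], n)

# Corrupt 10 of the 64 units and let the dynamics relax.
cue = list(pattern)
for i in rng.sample(range(n), 10):
    cue[i] = -cue[i]
print(recall(w, cue) == pattern)   # retrieval from within the basin
```

On the chip, the analogue of `store` is not a one-shot offline computation but the stimulus-driven spiking activity acting through the on-chip plastic synapses, with no separation between the learning and retrieval phases.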