Publication


Featured research published by Massimiliano Versace.


Brain Research | 2008

Spikes, synchrony, and attentive learning by laminar thalamocortical circuits

Stephen Grossberg; Massimiliano Versace

This article develops the Synchronous Matching Adaptive Resonance Theory (SMART) neural model to explain how the brain may coordinate multiple levels of thalamocortical and corticocortical processing to rapidly learn, and stably remember, important information about a changing world. The model clarifies how bottom-up and top-down processes work together to realize this goal, notably how processes of learning, expectation, attention, resonance, and synchrony are coordinated. The model hereby clarifies, for the first time, how the following levels of brain organization coexist to realize cognitive processing properties that regulate fast learning and stable memory of brain representations: single-cell properties, such as spiking dynamics, spike-timing-dependent plasticity (STDP), and acetylcholine modulation; detailed laminar thalamic and cortical circuit designs and their interactions; aggregate cell recordings, such as current source densities and local field potentials; and single-cell and large-scale inter-areal oscillations in the gamma and beta frequency domains. In particular, the model predicts how laminar circuits of multiple cortical areas interact with primary and higher-order specific thalamic nuclei and nonspecific thalamic nuclei to carry out attentive visual learning and information processing. The model simulates how synchronization of neuronal spiking occurs within and across brain regions, and triggers STDP. Matches between bottom-up adaptively filtered input patterns and learned top-down expectations cause gamma oscillations that support attention, resonance, learning, and consciousness. Mismatches inhibit learning while causing beta oscillations during reset and hypothesis testing operations that are initiated in the deeper cortical layers. The generality of learned recognition codes is controlled by a vigilance process mediated by acetylcholine.
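The abstract notes that synchronized spiking within and across regions triggers STDP. As a point of reference, a minimal pair-based STDP rule (illustrative constants and function name, not the SMART model's actual learning law) can be sketched as:

```python
import math

# Minimal pair-based STDP update: potentiate when the presynaptic spike
# precedes the postsynaptic spike, depress otherwise. The constants
# (a_plus, a_minus, tau) are illustrative, not taken from the SMART model.
def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre  # spike-time difference in ms
    if dt > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Synchronized spiking (small |dt|) yields the largest weight changes,
# which is why synchrony and STDP interact in the model's account.
print(stdp_dw(10.0, 12.0))  # small positive change
print(stdp_dw(12.0, 10.0))  # small negative change
```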


Expert Systems with Applications | 2004

Predicting the exchange traded fund DIA with a combination of genetic algorithms and neural networks

Massimiliano Versace; Rushi Bhatt; Oliver Hinds; Mark Shiffer

We evaluate the performance of a heterogeneous mixture of neural network algorithms for predicting the exchange-traded fund DIA. A genetic algorithm is used to find the best mixture of neural networks and the topology of the individual networks in the ensemble, and to determine the feature set. The genetic algorithm also determines the window size of the input time series supplied to the individual classifiers in the mixture of experts. The mixture of neural network experts consists of recurrent back-propagation networks and radial basis function networks. Applying a genetic algorithm to this heterogeneous mixture of powerful neural network architectures shows promise for the prediction of stock market time series. These highly non-linear, stochastic, and highly non-stationary time series are notoriously difficult to predict with conventional linear statistical methods. In this paper, we propose a biologically inspired methodology to tackle such hard problems with a multi-faceted solution.
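As a rough illustration of this kind of search, a toy genetic algorithm can evolve a chromosome encoding a window size, a feature mask, and per-expert topology. Everything here (the chromosome fields, the stand-in fitness function, truncation selection) is a hedged sketch of the general technique, not the paper's actual setup:

```python
import random

random.seed(0)

FEATURES = 8  # hypothetical number of candidate input features

# Hypothetical chromosome: (window size, feature mask, hidden units per expert).
def random_chromosome():
    return {
        "window": random.randint(5, 30),
        "mask": [random.random() < 0.5 for _ in range(FEATURES)],
        "hidden": [random.randint(2, 16) for _ in range(3)],  # 3 experts
    }

def fitness(ch):
    # Stand-in objective: prefer moderate windows and parsimonious feature
    # sets. In the paper this would be out-of-sample prediction error from
    # training the mixture of experts on DIA data.
    return -abs(ch["window"] - 15) - sum(ch["mask"])

def mutate(ch):
    ch = {"window": ch["window"], "mask": list(ch["mask"]), "hidden": list(ch["hidden"])}
    ch["window"] = max(5, min(30, ch["window"] + random.choice([-2, -1, 1, 2])))
    i = random.randrange(FEATURES)
    ch["mask"][i] = not ch["mask"][i]  # toggle one feature
    return ch

def evolve(generations=50, pop_size=20):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection keeps the elite
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best["window"])  # hill-climbs toward the stand-in optimum near 15
```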


IEEE Computer | 2011

From Synapses to Circuitry: Using Memristive Memory to Explore the Electronic Brain

Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Sean Patrick; Benjamin Chandler; Anatoli Gorchetchnikov; Ennio Mingolla

In a synchronous digital platform for building large cognitive models, memristive nanodevices form dense, resistive memories that can be placed close to conventional processing circuitry. Through adaptive transformations, the devices can interact with the world in real time.


Neural Computing and Applications | 2012

CARTMAP: a neural network method for automated feature selection in financial time series forecasting

Charles Wong; Massimiliano Versace

In the past two decades, there has been much interest in applying neural networks to financial time series forecasting. Yet, there has been relatively little attention paid to selecting the input features for training these networks. This paper presents a novel CARTMAP neural network based on Adaptive Resonance Theory that incorporates automatic, intuitive, transparent, and parsimonious feature selection with fast learning. On average, over three separate 4-year simulations spanning 2004–2009 of Dow Jones Industrial Average stocks, CARTMAP outperformed related and classical alternatives. The alternatives were an industry-standard random walk, a regression model, a general-purpose ARTMAP, and ARTMAP with stepwise feature selection. This paper also discusses why the novel feature selection scheme outperforms the alternatives and how it can represent a step toward more transparency in financial modeling.


Journal of Computational Neuroscience | 2010

Running as fast as it can: How spiking dynamics form object groupings in the laminar circuits of visual cortex

Jasmin Léveillé; Massimiliano Versace; Stephen Grossberg

How spiking neurons cooperate to control behavioral processes is a fundamental problem in computational neuroscience. Such cooperative dynamics are required during visual perception when spatially distributed image fragments are grouped into emergent boundary contours. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity occur in response to binary spikes with irregular timing across many interacting cells. Some models have demonstrated spiking dynamics in recurrent laminar neocortical circuits, but not how perceptual grouping occurs. Other models have analyzed the fast speed of certain percepts in terms of a single feedforward sweep of activity, but cannot explain other percepts, such as illusory contours, wherein perceptual ambiguity can take hundreds of milliseconds to resolve by integrating multiple spikes over time. The current model reconciles fast feedforward with slower feedback processing, and binary spikes with analog network-level properties, in a laminar cortical network of spiking cells whose emergent properties quantitatively simulate parametric data from neurophysiological experiments, including the formation of illusory contours; the structure of non-classical visual receptive fields; and self-synchronizing gamma oscillations. These laminar dynamics shed new light on how the brain resolves local informational ambiguities through the use of properly designed nonlinear feedback spiking networks which run as fast as they can, given the amount of uncertainty in the data that they process.


Frontiers in Computational Neuroscience | 2012

Persistence and storage of activity patterns in spiking recurrent cortical networks: modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine.

Jesse Palma; Stephen Grossberg; Massimiliano Versace

Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems in the 1970s showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimulus offset, then resolve to no stored pattern, or to winner-take-all (WTA) stored patterns with one or multiple winners. Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners when the network stabilizes.
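The rate-based networks referenced by the 1970s theorems can be sketched in the standard shunting on-center off-surround form. This generic equation (with A, B, I_i, and f as the usual decay, saturation bound, input, and feedback signal function; it is a textbook sketch, not this paper's spiking model) is:

```latex
% Recurrent shunting on-center off-surround network (rate-based sketch).
% A: passive decay, B: excitatory saturation, I_i: external input,
% f: feedback signal function.
\dot{x}_i = -A\,x_i + (B - x_i)\bigl[I_i + f(x_i)\bigr] - x_i \sum_{k \neq i} f(x_k)

% A sigmoid signal function, e.g.
f(w) = \frac{C\,w^2}{D^2 + w^2}
```

With a sigmoid f, activities below the quenching threshold decay as noise while those above it are contrast-enhanced before storage, which is the property the AHP and ACh effects above modulate.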


Journal of Computational Neuroscience | 2012

After-hyperpolarization currents and acetylcholine control sigmoid transfer functions in a spiking cortical model

Jesse Palma; Massimiliano Versace; Stephen Grossberg

Recurrent networks are ubiquitous in the brain, where they enable a diverse set of transformations during perception, cognition, emotion, and action. It has been known since the 1970s how, in rate-based recurrent on-center off-surround networks, the choice of feedback signal function can control the transformation of input patterns into activity patterns that are stored in short-term memory. A sigmoid signal function may, in particular, control a quenching threshold below which inputs are suppressed as noise and above which they may be contrast enhanced before the resulting activity pattern is stored. The threshold and slope of the sigmoid signal function determine the degree of noise suppression and of contrast enhancement. This article analyzes how sigmoid signal functions and their shape may be determined in biophysically realistic spiking neurons. Combinations of fast, medium, and slow after-hyperpolarization (AHP) currents, and their modulation by acetylcholine (ACh), can control sigmoid signal threshold and slope. Instead of a simple gain in excitability that was previously attributed to ACh, cholinergic modulation may cause translation of the sigmoid threshold. This property clarifies how activation of ACh by basal forebrain circuits, notably the nucleus basalis of Meynert, may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract information, as predicted by Adaptive Resonance Theory.
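To make the threshold-versus-slope distinction concrete, here is a Naka-Rushton-style sigmoid in which the half-maximum parameter K plays the role of the threshold and the exponent n sets the slope. Modeling ACh as a pure translation of K is an assumption for illustration, not the paper's biophysical mechanism:

```python
import numpy as np

# Naka-Rushton-style sigmoid signal function: f(x) = x^n / (K^n + x^n).
# K sets the threshold (half-maximum point) and n sets the slope.
# Treating ACh as a pure threshold translation (K -> smaller K) is an
# illustrative assumption, not the paper's AHP-current model.
def sigmoid_signal(x, K=1.0, n=4):
    xn = np.maximum(x, 0.0) ** n
    return xn / (K ** n + xn)

x = 0.6
baseline = sigmoid_signal(x, K=1.0)
with_ach = sigmoid_signal(x, K=0.7)  # lowered threshold
print(with_ach > baseline)  # True: the same input now drives a stronger signal
```

A translated threshold changes which inputs clear the quenching threshold without changing the maximum signal, which is the distinction drawn from a simple excitability gain.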


Attention Perception & Psychophysics | 2011

How do object reference frames and motion vector decomposition emerge in laminar cortical circuits

Stephen Grossberg; Jasmin Léveillé; Massimiliano Versace

How do spatially disjoint and ambiguous local motion signals in multiple directions generate coherent and unambiguous representations of object motion? Various motion percepts, starting with those of Duncker (Induced motion, 1929/1938) and Johansson (Configurations in event perception, 1950), obey a rule of vector decomposition, in which global motion appears to be subtracted from the true motion path of localized stimulus components, so that objects and their parts are seen as moving relative to a common reference frame. A neural model predicts how vector decomposition results from multiple-scale and multiple-depth interactions within and between the form- and motion-processing streams in V1–V2 and V1–MST, which include form grouping, form-to-motion capture, figure–ground separation, and object motion capture mechanisms. Particular advantages of the model are that these mechanisms solve the aperture problem, group spatially disjoint moving objects via illusory contours, capture object motion direction signals on real and illusory contours, and use interdepth directional inhibition to cause a vector decomposition, whereby the motion directions of a moving frame at a nearer depth suppress those directions at a farther depth, and thereby cause a peak shift in the perceived directions of object parts moving with respect to the frame.


International Symposium on Neural Networks | 2011

Review and unification of learning framework in Cog Ex Machina platform for memristive neuromorphic hardware

Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi

Realizing adaptive brain functions subserving perception, cognition, and motor behavior on biological temporal and spatial scales remains out of reach for even the fastest computers. Newly introduced memristive hardware approaches open the opportunity to implement dense, low-power synaptic memories of up to 10^15 bits per square centimeter. Memristors have the unique property of “remembering” the past history of their stimulation in their resistive state, and they do not require power to maintain their memory, making them ideal candidates for implementing large arrays of plastic synapses that support learning in neural models. Over the past decades, many learning rules have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To ensure an optimal implementation of a large variety of learning rules in hardware, some general and easily parameterized form of learning rule must be designed. This general-form learning equation would allow multiple learning rules to be instantiated through different parameterizations, without rewiring the hardware. This paper characterizes a subset of local learning rules amenable to implementation in memristive hardware. The analyzed rules belong to four broad classes: Hebb rule derivatives with various methods for gating learning and decay, threshold rule variations including the covariance and BCM families, input-reconstruction-based learning rules, and explicit temporal-trace-based rules.
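A hedged sketch of what such a general-form, parameterized local rule might look like follows; the specific functional form is illustrative, not the Cog Ex Machina equation. Different parameterizations recover different rule families:

```python
# Illustrative general-form local learning rule (not the paper's equation):
#   dw = lr * gate(post) * ((pre - theta) - decay * w)
# where gating by postsynaptic activity, the threshold theta, and the decay
# term select the rule variant without changing the update machinery.

def dw(pre, post, w, lr=0.1, gate_by_post=True, theta=0.0, decay=0.0):
    gate = post if gate_by_post else 1.0
    return lr * gate * ((pre - theta) - decay * w)

# Plain Hebb: no threshold, no decay -> dw = lr * post * pre
print(dw(pre=1.0, post=1.0, w=0.5))                   # 0.1
# Gated decay (instar-style): weight tracks pre while post is active
print(dw(pre=0.0, post=1.0, w=0.5, decay=1.0))        # -0.05
# Threshold (covariance-like): the sign of learning flips below theta
print(dw(pre=0.2, post=1.0, w=0.5, theta=0.5))        # negative
```

The point of the single parameterized form is exactly what the abstract argues for: the hardware update path stays fixed while parameters choose the rule.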


International Journal of Neural Systems | 2010

The role of dopamine in the maintenance of working memory in prefrontal cortex neurons: input-driven versus internally-driven networks.

Massimiliano Versace; Marco Zorzi

How do organisms select and organize relevant sensory input in working memory (WM) in order to deal with constantly changing environmental cues? Once information has been stored in WM, how is it protected from, and altered by, the continuous stream of sensory input and internally generated planning? The present study proposes a novel role for dopamine (DA) in the maintenance of WM in prefrontal cortex (Pfc) neurons that begins to address these issues. In particular, DA mediates the alternation of the Pfc network between input-driven and internally-driven states, which in turn drives WM updates and storage. A biologically inspired neural network model of Pfc is formulated to provide a link between the mechanisms of state switching and the biophysical properties of Pfc neurons. This model belongs to the recurrent competitive fields [33] class of dynamical systems, which have been extensively characterized mathematically and exhibit the two functional states of interest: input-driven and internally-driven. This hypothesis was tested with two working memory tasks of increasing difficulty: a simple working memory task and a delayed alternation task. The results suggest that optimal WM storage in spite of noise is achieved with a phasic DA input followed by lower sustained DA activity. Hypo- and hyper-dopaminergic activity that alters this ideal pattern leads to increased distractibility from non-relevant patterns and to prolonged perseveration on presented patterns, respectively.
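One way to picture the two functional states is a toy rate-based recurrent competitive field in which dopamine is modeled, purely as an illustrative assumption, as a gain g on the recurrent feedback signal: with weak feedback the network follows its inputs, while with strong feedback a stored pattern persists against new input. The parameters and the reduction of DA to a single gain are a sketch, not the paper's biophysical Pfc model:

```python
import numpy as np

# Minimal recurrent competitive field: shunting self-excitation plus
# off-surround inhibition, with faster-than-linear feedback f(x) = x^2
# producing contrast enhancement. g stands in for DA-modulated feedback gain.
def rcf_step(x, inputs, g, dt=0.05, A=1.0, B=1.0):
    f = x ** 2                              # faster-than-linear feedback signal
    exc = (B - x) * (inputs + g * f)        # shunted on-center excitation
    inh = x * (g * np.sum(f) - g * f)       # off-surround from all other cells
    return np.clip(x + dt * (-A * x + exc - inh), 0.0, B)

x = np.zeros(4)
stim = np.array([1.0, 0.6, 0.3, 0.1])
for _ in range(200):                        # store the pattern under strong feedback
    x = rcf_step(x, stim, g=8.0)
stored = x.copy()
distractor = np.array([0.0, 0.0, 0.0, 1.0])
for _ in range(200):                        # internally-driven state resists the distractor
    x = rcf_step(x, distractor, g=8.0)
print(np.argmax(x) == np.argmax(stored))    # the stored winner persists
```

With a small g the recurrent terms vanish and the same network simply tracks its inputs, which is the input-driven regime the abstract contrasts with internally-driven maintenance.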
