Massimo Mascaro
Sapienza University of Rome
Publications
Featured research published by Massimo Mascaro.
Vision Research | 2003
Yali Amit; Massimo Mascaro
We describe an architecture for invariant visual detection and recognition. Learning is performed in a single central module. The architecture makes use of a replica module consisting of copies of retinotopic layers of local features, with a particular design of inputs and outputs that allows them to be primed either to attend to a particular location or to attend to a particular object representation. In the former case the data at a selected location can be classified in the central module. In the latter case all instances of the selected object are detected in the field of view. The architecture is used to explain a number of psychophysical and physiological observations: object-based attention, the different response-time slopes of target detection among distractors, and observed attentional modulation of neuronal responses. We hypothesize that the organization of visual cortex in columns of neurons responding to the same feature at the same location may provide the copying architecture needed for translation invariance.
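The two priming modes lend themselves to a short illustration. Below is a minimal sketch, not the authors' model: a retinotopic grid of feature vectors stands in for the replica module, and per-class templates stand in for the central module. All array shapes, thresholds, and function names are illustrative assumptions.

```python
# Sketch of the two priming modes: spatial priming routes one location's
# features to a central classifier; object priming gates the whole field
# by a class template so every instance of that object is detected.
import numpy as np

rng = np.random.default_rng(0)
H, W, F, C = 8, 8, 16, 10             # grid height/width, features, classes
features = rng.random((H, W, F))      # retinotopic layers of local features
templates = rng.random((C, F))        # central module: one template per class

def attend_location(y, x):
    """Spatial priming: classify the feature vector at one location."""
    v = features[y, x]
    return int(np.argmax(templates @ v))

def attend_object(c, threshold=0.8):
    """Object priming: detect all locations matching class template c."""
    flat = features.reshape(-1, F)
    scores = flat @ templates[c]
    scores /= (np.linalg.norm(flat, axis=1)
               * np.linalg.norm(templates[c]) + 1e-9)  # cosine similarity
    return np.argwhere(scores.reshape(H, W) > threshold)

print(attend_location(3, 4))          # class label at one attended location
print(attend_object(2))               # every location resembling class 2
```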
Neural Computation | 2001
Yali Amit; Massimo Mascaro
We describe a system of thousands of binary perceptrons with coarse oriented edges as input that is able to recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a prelearned attractor (serving as an associative hook) in the recurrent net, corresponding to a randomly selected subpopulation of the perceptrons. In training, first the attractor of the correct class is activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels and to allow the use of the same threshold on all perceptrons. Recognition is based only on the visual stimuli: these activate the recurrent network, which is then driven by the dynamics to a sustained attractor state concentrated in the correct class subset, providing a form of working memory. We believe this architecture is more transparent than standard feedforward two-layer networks and has stronger biological analogies.
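The training step described above can be sketched in a few lines. The following is one illustrative reading, not the authors' implementation: the attractor subpopulation of the correct class is activated, the stimulus is presented, and each feedforward synapse is updated as a function of the postsynaptic unit's local field, with weights clipped to remain positive. The sizes, learning rate, and exact form of the field dependence are assumptions.

```python
# Field-dependent Hebbian learning with positive synapses: potentiate
# synapses onto active units whose field is still below threshold, and
# depress synapses onto inactive units driven above it.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_perc, n_classes = 100, 1000, 10
J = rng.random((n_perc, n_in)) * 0.01      # positive feedforward synapses
attractors = [rng.choice(n_perc, size=50, replace=False)
              for _ in range(n_classes)]   # one random subpopulation per class
theta = 2.0                                # same threshold on all perceptrons

def train_step(x, label, dj=0.01):
    """One field-dependent Hebbian update (x is a binary feature vector)."""
    active = np.zeros(n_perc, bool)
    active[attractors[label]] = True       # attractor primed before the stimulus
    h = J @ x                              # local field from the input layer
    potentiate = active & (h < theta)
    depress = ~active & (h > theta)
    J[potentiate] += dj * x
    J[depress] -= dj * x
    np.clip(J, 0.0, None, out=J)           # synapses remain positive

x = (rng.random(n_in) < 0.2).astype(float)  # sparse binary stimulus
train_step(x, label=3)
```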
Network: Computation in Neural Systems | 1999
Massimo Mascaro; Daniel J. Amit
Collective behaviour of neural networks often divides the ensemble of neurons into sub-classes by neuron type, by selective synaptic potentiation, or by mode of stimulation. When the number of classes becomes larger than two, the analysis, even in a mean-field theory, loses its intuitive aspect because of the number of dimensions of the space of dynamical variables. Often one is interested in the behaviour of a reduced set of sub-populations (in focus) and in their dependence on the system's parameters, as in searching for coexistence of spontaneous activity and working memory; in the competition between different working memories; in the competition between working memory and a new stimulus; or in the interaction between selective activity in two different neural modules. For such cases we present a method for reducing the dimensionality of the system to one or two dimensions, even when the total number of populations involved is higher. In the reduced system the familiar intuitive tools apply, and the analysis of the dependence of different network states on ambient parameters becomes transparent. Moreover, when the coding of states in focus is sparse, the computational complexity is much reduced. Beyond the analysis, we present a set of detailed examples. We conclude with a discussion of questions of stability in the reduced system.
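The reduction idea can be illustrated with an assumed rate model, which is not the paper's formulation: the populations outside the focus are held at fixed rates and folded into an effective background current, leaving a two-dimensional system in which the familiar nullcline and phase-plane tools apply.

```python
# Collapsing a 5-population rate network to the 2 populations in focus.
import numpy as np

def phi(h):
    """Assumed sigmoidal transfer function of a population."""
    return 1.0 / (1.0 + np.exp(-h))

rng = np.random.default_rng(2)
W = rng.normal(0, 0.5, (5, 5))        # full synaptic matrix, 5 populations
ext = rng.normal(0, 0.2, 5)           # external inputs

focus = [0, 1]                        # populations we want to analyze
rest = [2, 3, 4]
nu_rest = np.full(len(rest), 0.3)     # background rates held fixed

def reduced_rhs(nu_focus):
    """dnu/dt for the in-focus populations, background folded in."""
    h = (W[np.ix_(focus, focus)] @ nu_focus
         + W[np.ix_(focus, rest)] @ nu_rest   # effective constant current
         + ext[focus])
    return -nu_focus + phi(h)

# Fixed points of the reduced 2D flow sit where reduced_rhs == 0.
nu = np.array([0.1, 0.1])
for _ in range(200):                  # simple Euler integration
    nu += 0.1 * reduced_rhs(nu)
print(nu)
```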
Journal of Neuroscience Methods | 2005
David C. Bradley; Massimo Mascaro; Satish Santhakumar
We describe a relational database (RDB) structure suitable for trial-based experiments such as human psychophysics and neural recording studies in trained animals. An RDB is a collection of tables, each composed of columns. Some of the tables contain columns that reference specific columns of other tables. This referencing system links the tables to each other and makes it possible to extract any subset of the data with trivial commands. An equally important advantage of an RDB is that it imposes a consistent data format on the applications that generate and analyze data. The result is a centralization and standardization of data storage that facilitates the pooling, cross-checking and re-analysis of data from various experiments. We present a robust RDB structure originally designed for neurophysiological data; however, it is abstract enough to accommodate data from a variety of trial-based experimental designs. Moreover, we demonstrate the advantages of this RDB structure and describe its implementation in other laboratories.
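The referencing scheme can be sketched with SQLite and invented table names (the paper's actual schema differs): each trial row points to its session and each event row to its trial, so arbitrary subsets of the data fall out of a single join.

```python
# Minimal trial-based RDB sketch: sessions -> trials -> events,
# linked by foreign-key columns referencing other tables' columns.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sessions (
    session_id INTEGER PRIMARY KEY,
    subject    TEXT,
    date       TEXT
);
CREATE TABLE trials (
    trial_id   INTEGER PRIMARY KEY,
    session_id INTEGER REFERENCES sessions(session_id),
    condition  TEXT,
    outcome    TEXT
);
CREATE TABLE events (
    event_id   INTEGER PRIMARY KEY,
    trial_id   INTEGER REFERENCES trials(trial_id),
    name       TEXT,
    t_ms       REAL
);
""")
con.execute("INSERT INTO sessions VALUES (1, 'M1', '2005-01-10')")
con.execute("INSERT INTO trials VALUES (1, 1, 'left', 'correct')")
con.execute("INSERT INTO events VALUES (1, 1, 'saccade_onset', 312.5)")

# Trivial extraction across the linked tables: all event times from
# correct trials of one subject.
rows = con.execute("""
    SELECT e.name, e.t_ms
    FROM events e
    JOIN trials t   ON e.trial_id = t.trial_id
    JOIN sessions s ON t.session_id = s.session_id
    WHERE t.outcome = 'correct' AND s.subject = 'M1'
""").fetchall()
print(rows)
```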
Journal of Neurophysiology | 2005
David C. Bradley; P. R. Troyk; J. Berg; M. Bak; Stuart F. Cogan; Robert K. Erickson; C. Kufta; Massimo Mascaro; Douglas B. McCreery; E. M. Schmidt; Vernon L. Towle; Hong Xu
Cerebral Cortex | 2005
Alexandra Battaglia-Mayer; Massimo Mascaro; Emiliano Brunamonti; Roberto Caminiti
Cerebral Cortex | 2007
Alexandra Battaglia-Mayer; Massimo Mascaro; Roberto Caminiti
Cerebral Cortex | 2003
Massimo Mascaro; Alexandra Battaglia-Mayer; Lorenzo Nasi; Daniel J. Amit; Roberto Caminiti
Archive | 2013
Massimo Mascaro; David C. Bradley
Archive | 2016
David C. Bradley; Yao Huang Morin; Massimo Mascaro; Janis I. Intoy; Sean O'Connor; Ellisha Marongelli; Nick Hilton