Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Roberto Prevete is active.

Publication


Featured research published by Roberto Prevete.


Connection Science | 2012

Programming in the brain: a neural network theoretical framework

Francesco Donnarumma; Roberto Prevete; Giuseppe Trautteur

Recent research shows that some brain areas perform more than one task, with switching times between tasks that are incompatible with learning, and that parts of the brain are controlled by other parts of the brain, are "recycled", or are used and reused for various purposes by other neural circuits in different task categories and cognitive domains. All this is conducive to the notion of "programming in the brain". In this paper, we describe a programmable neural architecture, biologically plausible on the neural level, and we implement, test, and validate it in order to support the programming interpretation of the above-mentioned phenomenology. A programmable neural network is a fixed-weight network that is endowed with auxiliary or programming inputs and behaves as any of a specified class of neural networks when its programming inputs are fed with a code of the weight matrix of a network of the class. The construction is based on "pulling out" the multiplication between synaptic weights and neuron outputs and having it performed in "software" by specialised multiplicative-response fixed subnetworks. The construction has been tested for robustness with respect to various sources of noise. Theoretical underpinnings, analysis of related research, detailed construction schemes, and extensive testing results are given.
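The core programmability idea can be illustrated in a few lines: a fixed piece of machinery receives the weight code of a target network as an auxiliary input and multiplies it against the current activations, so that different codes yield different behaviors. The sketch below is a minimal illustration of that idea in plain NumPy, with ordinary multiplication standing in for the paper's multiplicative-response subnetworks; all names, shapes and activation choices are assumptions of the example, not the authors' implementation.

```python
# A minimal sketch of the programmability idea, under the assumptions stated
# above: the flattened weight matrix of a target network is supplied as a
# "program" input, and the weight-times-activity products are computed
# explicitly (plain multiplication standing in for the multiplicative-response
# subnetworks described in the paper).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interpreter_step(x, program, n_units):
    """One update of the fixed interpreter driven by a program vector.

    x       : current activations, shape (n_units,)
    program : flattened weight matrix of the emulated network, shape (n_units**2,)
    """
    W = program.reshape(n_units, n_units)   # decode the program into weights
    return sigmoid(W @ x)                   # multiplication done "in software"

rng = np.random.default_rng(0)
n = 4
x = rng.uniform(-1.0, 1.0, n)

# Two different programs (weight codes) make the same fixed machinery
# behave like two different networks.
prog_a = rng.normal(0.0, 1.0, n * n)
prog_b = rng.normal(0.0, 1.0, n * n)

print(interpreter_step(x, prog_a, n))
print(interpreter_step(x, prog_b, n))
```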


Brain Research | 2008

A connectionist architecture for view-independent grip-aperture computation.

Roberto Prevete; Giovanni Tessitore; Matteo Santoro; Ezio Catanzariti

This paper addresses the problem of extracting view-invariant visual features for the recognition of object-directed actions and introduces a computational model of how these visual features are processed in the brain. In particular, in the test-bed setting of reach-to-grasp actions, grip aperture is identified as a good candidate for inclusion into a parsimonious set of high-level hand features describing overall hand movement during reach-to-grasp actions. The computational model NeGOI (neural network architecture for measuring grip aperture in an observer-independent way) for extracting grip aperture in a view-independent fashion was developed on the basis of functional hypotheses about cortical areas that are involved in visual processing. An assumption built into NeGOI is that grip aperture can be measured from the superposition of a small number of prototypical hand shapes corresponding to predefined grip-aperture sizes. The key idea underlying the NeGOI model is to introduce view-independent units (VIP units) that are selective for prototypical hand shapes, and to integrate the output of VIP units in order to compute grip aperture. The distinguishing traits of the NeGOI architecture are discussed together with results of tests concerning its view-independence and grip-aperture recognition properties. The overall functional organization of the NeGOI model is shown to be coherent with current functional models of the ventral visual stream, up to and including temporal area STS. Finally, the functional role of the NeGOI model is examined from the perspective of a biologically plausible architecture which provides a parsimonious set of high-level and view-independent visual features as input to mirror systems.
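As a rough illustration of the aperture read-out described above, the following sketch lets units tuned to a few prototypical hand shapes respond to an observed hand and reads grip aperture as a response-weighted superposition of the apertures assigned to those prototypes. The Gaussian tuning, the prototype values and the aperture scale are assumptions of the example, not NeGOI's actual parameters.

```python
# Illustrative read-out of grip aperture from prototype-tuned units, with
# hypothetical prototypes and aperture values.
import numpy as np

def grip_aperture(hand_features, prototypes, prototype_apertures, sigma=1.0):
    # Responses of units tuned to prototypical hand shapes (Gaussian tuning
    # is a stand-in for the model's actual selectivity).
    d2 = np.sum((prototypes - hand_features) ** 2, axis=1)
    r = np.exp(-d2 / (2.0 * sigma ** 2))
    r = r / r.sum()                          # normalized unit responses
    return float(r @ prototype_apertures)    # weighted superposition

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(3, 8))         # 3 prototypical hand shapes, 8-D features
apertures = np.array([2.0, 6.0, 10.0])       # aperture (cm) assigned to each prototype
observed = prototypes[1] + 0.1 * rng.normal(size=8)

print(grip_aperture(observed, prototypes, apertures))   # close to 6.0
```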


Adaptive Behavior | 2016

Learning programs is better than learning dynamics

Francesco Donnarumma; Roberto Prevete; Andrea de Giorgio; Guglielmo Montone; Giovanni Pezzulo

Distributed and hierarchical models of control are nowadays popular in computational modeling and robotics. In the artificial neural network literature, complex behaviors can be produced by composing elementary building blocks or motor primitives, possibly organized in a layered structure. However, it is still unknown how the brain learns and encodes multiple motor primitives, and how it rapidly reassembles, sequences and switches them by exerting cognitive control. In this paper we advance a novel proposal: a hierarchical programmable neural network architecture, based on the notion of programmability and an interpreter-programmer computational scheme. In this approach, complex (and novel) behaviors can be acquired by embedding multiple modules (motor primitives) in a single, multi-purpose neural network. This is supported by recent theories of brain functioning in which skilled behaviors can be generated by combining functionally different primitives embedded in "reusable" areas of "recycled" neurons. Such a neuronal substrate supports flexible cognitive control, too. Modules are seen either as interpreters of behaviors, having controlling input parameters, or as programs that encode structures of networks to be interpreted. Flexible cognitive control can be exerted by a programmer module feeding the interpreters with appropriate input parameters, without modifying connectivity. Our results in a multiple T-maze robotic scenario show how this computational framework provides a robust, scalable and flexible scheme that can be iterated at different hierarchical layers, making it possible to learn, encode and control multiple qualitatively different behaviors.
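The interpreter-programmer scheme can be sketched as follows: a programmer maps a task cue to a stored program vector, and a single fixed interpreter produces a different behavior for each program without any change to its own connectivity. The stored programs, the tanh dynamics and the cue names below are illustrative assumptions, not the trained controllers used in the paper.

```python
# Toy interpreter-programmer scheme: one fixed interpreter, several programs.
import numpy as np

rng = np.random.default_rng(2)
n = 3
programs = {                                  # one hypothetical weight code per behavior
    "turn_left":  rng.normal(0.0, 1.0, (n, n)),
    "turn_right": rng.normal(0.0, 1.0, (n, n)),
}

def programmer(task_cue):
    """Map a task cue to the program encoding the requested behavior."""
    return programs[task_cue]

def interpreter(state, program, steps=5):
    """Fixed machinery: iterate the dynamics encoded by the supplied program."""
    for _ in range(steps):
        state = np.tanh(program @ state)
    return state

x0 = np.array([0.1, -0.2, 0.3])
for cue in ("turn_left", "turn_right"):
    # Same interpreter, different behavior, no change of connectivity.
    print(cue, interpreter(x0, programmer(cue)))
```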


Neural Networks | 2015

Neural networks with non-uniform embedding and explicit validation phase to assess Granger causality

Alessandro Montalto; Sebastiano Stramaglia; Luca Faes; Giovanni Tessitore; Roberto Prevete; Daniele Marinazzo

A challenging problem when studying a dynamical system is to find the interdependencies among its individual components. Several algorithms have been proposed to detect directed dynamical influences between time series. Two of the most used approaches are a model-free one (transfer entropy) and a model-based one (Granger causality). Several pitfalls are related to the presence or absence of assumptions in modeling the relevant features of the data. We tried to overcome those pitfalls using a neural network approach in which a model is built without any a priori assumptions. In this sense, this method can be seen as a bridge between model-free and model-based approaches. The experiments performed show that the method presented in this work can detect the correct dynamical information flows occurring in a system of time series. Additionally, we adopt a non-uniform embedding framework according to which only the past states that actually help the prediction are entered into the model, improving the prediction and avoiding the risk of overfitting. This method also leads to a further improvement with respect to traditional Granger causality approaches when redundant variables (i.e. variables sharing the same information about the future of the system) are involved. Neural networks are also able to recognize dynamics in data sets completely different from the ones used during the training phase.
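A simplified sketch of the non-uniform embedding step is given below: candidate past lags of the target and of a potential driver are added greedily only when they reduce out-of-sample prediction error, and a Granger-style influence is suggested when lags of the driver survive the selection. A plain least-squares predictor stands in for the paper's neural networks, and the improvement threshold and synthetic data are assumptions of the example.

```python
# Simplified non-uniform embedding with a least-squares predictor standing in
# for the neural networks of the paper; data and thresholds are illustrative.
import numpy as np

def val_error(lagged, target, selected, split=0.7):
    """Held-out prediction error using only the selected lag columns."""
    if not selected:
        return float(np.var(target))
    X = lagged[:, selected]
    n_train = int(split * len(target))
    coef, *_ = np.linalg.lstsq(X[:n_train], target[:n_train], rcond=None)
    resid = target[n_train:] - X[n_train:] @ coef
    return float(np.mean(resid ** 2))

rng = np.random.default_rng(3)
T, max_lag = 500, 3
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                         # y is driven by x with lag 1
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

# Candidate lagged columns from both series (the non-uniform embedding pool).
cols, labels = [], []
for lag in range(1, max_lag + 1):
    cols.append(y[max_lag - lag:T - lag]); labels.append(f"y-{lag}")
    cols.append(x[max_lag - lag:T - lag]); labels.append(f"x-{lag}")
lagged = np.column_stack(cols)
target = y[max_lag:]

# Greedy selection: keep adding the lag that most reduces validation error,
# and stop when no candidate gives a clear (>2%) improvement.
selected, err = [], val_error(lagged, target, [])
while True:
    best_j, best_err = None, err
    for j in range(lagged.shape[1]):
        if j in selected:
            continue
        e = val_error(lagged, target, selected + [j])
        if e < best_err * 0.98:
            best_j, best_err = j, e
    if best_j is None:
        break
    selected.append(best_j)
    err = best_err

print("selected lags:", [labels[j] for j in selected])
print("x Granger-causes y:", any(labels[j].startswith("x") for j in selected))
```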


Behavioral and Brain Sciences | 2010

How and over what timescales does neural reuse actually occur?

Francesco Donnarumma; Roberto Prevete; Giuseppe Trautteur

We isolate some critical aspects of the reuse notion in Anderson's massive redeployment hypothesis (MRH). We note that the question of how local neural circuits are actually rearranged, at a timescale comparable with the reactivity timescale of the organism, is left open. We propose the concept of a programmable neural network as a solution.


Physics of Life Reviews | 2015

The role of synergies within generative models of action execution and recognition: A computational perspective Comment on "Grasping synergies: A motor-control approach to the mirror neuron mechanism" by A. D'Ausilio et al.

Giovanni Pezzulo; Francesco Donnarumma; Pierpaolo Iodice; Roberto Prevete; Haris Dindo

Controlling the body – given its huge number of degrees of freedom – poses severe computational challenges. Mounting evidence suggests that the brain alleviates this problem by exploiting "synergies", or patterns of muscle activities (and/or movement dynamics and kinematics) that can be combined to control action, rather than controlling individual muscles or joints [1–10]. D'Ausilio et al. [11] explain how this view of motor organization based on synergies can profoundly change the way we interpret studies of action recognition in humans and monkeys, and in particular the controversy on the "granularity" of the mirror neuron system (MNs): whether it encodes (lower) kinematic aspects of movements, or (higher) goal representations, or both but at different hierarchical levels [12]. Here we offer a complementary, computational perspective on the role of synergies for action recognition and the MNs. In computational modeling and robotics, it is widely assumed that a control scheme using synergies simplifies movement planning and execution. This scheme makes it possible to use elemental behaviors or primitives as "building blocks" to be composed (e.g., combined linearly, sequenced) to produce more complex behaviors, thus controlling relatively few degrees of freedom [13–15]. Do synergies yield equivalent benefits for action recognition? To answer this question from a computational viewpoint, we frame the concept of synergies within generative architectures of action execution and recognition [16–20]. According to two leading theories of motor control, optimal feedback control [21] and active inference [22], the motor system can be conceptualized as a (hierarchical) generative model, which encodes a (probabilistic) mapping between "task goals" specified at a higher level (e.g., grasping a cup) and states of the "plant" to be controlled (i.e., …


Cognitive Systems Research | 2011

Perceiving affordances: A computational investigation of grasping affordances

Roberto Prevete; Giovanni Tessitore; Ezio Catanzariti; Guglielmo Tamburrini

The Grasping Affordance Model (GAM) introduced here provides a computational account of perceptual processes enabling one to identify grasping action possibilities from visual scenes. GAM identifies the core of affordance perception with visuo-motor transformations enabling one to associate features of visually presented objects with a collection of hand grasping configurations. This account is coherent with neuroscientific models of relevant visuo-motor functions and their localization in the monkey brain. GAM differs from other computational models of biological grasping affordances in its modeling focus, functional account, and tested abilities. Notably, by learning to associate object features with hand shapes, GAM generalizes its grasp identification abilities to a variety of previously unseen objects. Even though GAM information processing does not involve semantic memory access and full-fledged object recognition, perceptions of (grasping) affordances are mediated by substantive computational mechanisms which include learning of object parts, selective analysis of visual scenes, and guessing from experience.
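A minimal stand-in for the visuo-motor association at the core of GAM is sketched below: a regressor learned on seen objects maps visual object features to hand grasp configurations and can then propose a configuration for a previously unseen object. Ridge regression and the synthetic features are only illustrative assumptions, not GAM's actual architecture.

```python
# Hypothetical feature-to-grasp association learned by ridge regression; all
# data are synthetic and the regressor is only a proxy for GAM's architecture.
import numpy as np

rng = np.random.default_rng(4)
n_objects, n_feat, n_joints = 40, 6, 5

# "Seen" objects: visual features and the grasp configurations used on them.
F = rng.uniform(size=(n_objects, n_feat))                 # object features
true_map = rng.normal(size=(n_feat, n_joints))
G = F @ true_map + 0.05 * rng.normal(size=(n_objects, n_joints))  # grasp configs

# Learn the association between object features and hand configurations.
lam = 1e-2
W = np.linalg.solve(F.T @ F + lam * np.eye(n_feat), F.T @ G)

# Propose a grasp configuration for a previously unseen object.
novel = rng.uniform(size=n_feat)
print("proposed hand configuration:", novel @ W)
```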


Neurocomputing | 2015

A linear approach for sparse coding by a two-layer neural network

Alessandro Montalto; Giovanni Tessitore; Roberto Prevete

Many approaches to transform classification problems from non-linear to linear by feature transformation have been recently presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process upon the presentation of unseen data input vectors, or else involve the use of large numbers of parameters and hyper-parameters, which must be chosen through cross-validation, thus increasing running time dramatically. In this paper, we propose and experimentally investigate a new approach for the purpose of overcoming limitations of both kinds. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder computing a sparse code which turns out to be as similar as possible to the sparse coding that one obtains by re-training the neural network. Importantly, the linearity of SCNN and the choice of the error function allow one to achieve reduced running time in the learning phase. The proposed architecture is evaluated on the basis of two standard machine learning tasks. Its performance is compared with that of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.
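The general strategy of pairing a linear encoder/decoder with a sparsity-inducing error function can be sketched as follows: after training, new inputs receive a sparse code through a single matrix multiplication, with no per-sample optimization. The specific loss, the plain gradient loop and the synthetic data are assumptions of the example, not the SCNN training procedure.

```python
# Linear encoder/decoder trained with reconstruction error plus an L1 penalty
# on the code; loss, learning rate and data are assumptions of the sketch.
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 200, 20, 30                  # samples, input dimension, code dimension
X = rng.normal(size=(n, d))

E = 0.1 * rng.normal(size=(d, k))      # linear encoder
D = 0.1 * rng.normal(size=(k, d))      # linear decoder
lam, lr = 0.1, 1e-3

for _ in range(2000):
    C = X @ E                          # codes: one matrix multiplication per batch
    R = C @ D - X                      # reconstruction residual
    grad_D = C.T @ R / n
    grad_E = X.T @ (R @ D.T) / n + lam * X.T @ np.sign(C) / n
    D -= lr * grad_D
    E -= lr * grad_E

C = X @ E                              # coding new data requires no re-training
print("mean reconstruction error:", np.mean((C @ D - X) ** 2))
print("fraction of near-zero code entries:", np.mean(np.abs(C) < 0.05))
```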


international conference on adaptive and natural computing algorithms | 2011

A robotic scenario for programmable fixed-weight neural networks exhibiting multiple behaviors

Guglielmo Montone; Francesco Donnarumma; Roberto Prevete

Artificial neural network architectures are systems which usually exhibit a single, specific behavior on the basis of a fixed structure expressed in terms of parameters computed by a training phase. In contrast with this approach, we present a robotic scenario in which an artificial neural network architecture, the Multiple Behavior Network (MBN), is proposed as a robotic controller in a simulated environment. MBN is composed of two Continuous-Time Recurrent Neural Networks (CTRNNs) and is organized in a hierarchical way: an Interpreter Module (IM) and a Program Module (PM). IM is a fixed-weight CTRNN designed in such a way as to behave as an interpreter of the signals coming from PM, thus being able to switch among different behaviors in response to the PM output programs. We suggest how such an MBN architecture can be incrementally trained in order to exhibit and even acquire new behaviors by letting PM learn new programs, without modifying the IM structure.
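The interpreter module's behavior switching can be illustrated with a toy CTRNN that keeps its recurrent weights fixed and receives an extra program input: changing only that input moves the same network into a different output regime. The Euler integration and all parameter values below are assumptions of the example, not the MBN controller.

```python
# Toy CTRNN interpreter with a program input channel; parameters are invented
# for the example.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_run(program, steps=200, dt=0.05):
    rng = np.random.default_rng(6)        # re-seeded: both runs share the same fixed weights
    n = 5
    W = rng.normal(0.0, 1.5, (n, n))      # fixed interpreter weights
    W_prog = rng.normal(0.0, 1.5, (n, len(program)))
    tau = np.full(n, 0.5)                 # time constants
    y = np.zeros(n)
    for _ in range(steps):                # Euler integration of the CTRNN dynamics
        dy = (-y + W @ sigmoid(y) + W_prog @ program) / tau
        y = y + dt * dy
    return sigmoid(y)

# Two program inputs drive the same fixed network into two output patterns.
print(ctrnn_run(np.array([1.0, 0.0])))
print(ctrnn_run(np.array([0.0, 1.0])))
```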


Scientific Reports | 2018

Evidence for sparse synergies in grasping actions

Roberto Prevete; Francesco Donnarumma; Andrea d'Avella; Giovanni Pezzulo

Converging evidence shows that hand actions are controlled at the level of synergies and not single muscles. One intriguing aspect of synergy-based action representation is that it may be intrinsically sparse and the same synergies can be shared across several distinct types of hand actions. Here, adopting a normative angle, we consider three hypotheses for hand-action optimal control: the sparse-combination hypothesis (SC), i.e., sparsity in the mapping between synergies and actions, so that actions are implemented using a sparse combination of synergies; the sparse-elements hypothesis (SE), i.e., sparsity in the synergy representation, so that the mapping between degrees of freedom (DoF) and synergies is sparse; and the double-sparsity hypothesis (DS), a novel view combining SC and SE, in which both the mapping between DoF and synergies and the mapping between synergies and actions are sparse, with each action implementing a sparse combination of synergies (as in SC), each using a limited set of DoFs (as in SE). We evaluate these hypotheses using hand kinematic data from six human subjects performing nine different types of reach-to-grasp actions. Our results support DS, suggesting that the best action representation is based on a relatively large set of synergies, each involving a reduced number of degrees of freedom, and that distinct sets of synergies may be involved in distinct tasks.
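The double-sparsity hypothesis can be phrased as a matrix factorization with sparsity on both factors: kinematic data X (actions by DoF) is approximated as C @ S with an L1 penalty on the action-to-synergy coefficients C (the SC side) and on the synergy-to-DoF matrix S (the SE side). The alternating gradient steps with soft-thresholding and the synthetic data below are a generic stand-in, not the paper's actual fitting procedure.

```python
# Double-sparsity factorization sketch: X ~ C @ S with L1 shrinkage on both
# the combination coefficients C and the synergy matrix S; synthetic data.
import numpy as np

def soft(M, t):
    """Soft-thresholding (proximal step for the L1 penalty)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

rng = np.random.default_rng(7)
n_actions, n_dof, n_syn = 60, 15, 8

# Ground truth with sparse coefficients and sparse synergies, plus noise.
C_true = rng.normal(size=(n_actions, n_syn)) * (rng.uniform(size=(n_actions, n_syn)) < 0.3)
S_true = rng.normal(size=(n_syn, n_dof)) * (rng.uniform(size=(n_syn, n_dof)) < 0.4)
X = C_true @ S_true + 0.05 * rng.normal(size=(n_actions, n_dof))

C = 0.1 * rng.normal(size=(n_actions, n_syn))   # estimated combination coefficients
S = 0.1 * rng.normal(size=(n_syn, n_dof))       # estimated synergies
lr, lam_c, lam_s = 5e-3, 0.1, 0.1

for _ in range(2000):
    R = C @ S - X
    C = soft(C - lr * (R @ S.T), lr * lam_c)    # gradient step + shrinkage on C (SC side)
    R = C @ S - X
    S = soft(S - lr * (C.T @ R), lr * lam_s)    # gradient step + shrinkage on S (SE side)

print("reconstruction error:", np.mean((C @ S - X) ** 2))
print("zeros in C:", np.mean(C == 0.0), " zeros in S:", np.mean(S == 0.0))
```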

Collaboration


Dive into Roberto Prevete's collaborations.

Top Co-Authors

Ezio Catanzariti, University of Naples Federico II

Giovanni Tessitore, University of Naples Federico II

Matteo Santoro, University of Naples Federico II

Giuseppe Trautteur, Istituto Nazionale di Fisica Nucleare

Guglielmo Montone, Paris Descartes University

Guglielmo Tamburrini, University of Naples Federico II