Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marco Protasi is active.

Publication


Featured research published by Marco Protasi.


international symposium on neural networks | 1997

Suspiciousness of loading problems

P. Frasconi; Marco Gori; Stefano Fanelli; Marco Protasi

We introduce the notion of suspect families of loading problems in an attempt to formalize situations in which classical learning algorithms based on local optimization are likely to fail (because of local minima or numerical precision problems). We show that any loading problem belonging to a nonsuspect family can be solved with optimal complexity by a canonical form of gradient descent with forced dynamics (i.e., for this class of problems no algorithm exhibits a better computational complexity than a slightly modified form of backpropagation). The analyses of this paper suggest intriguing links between the shape of the error surface attached to parametrical learning systems (like neural networks) and the computational complexity of the corresponding optimization problem.
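
The failure mode the abstract formalizes is easy to see on a toy surface. Below is a minimal sketch of gradient descent with a momentum-style forcing term; it is purely illustrative, does not reproduce the paper's canonical forced dynamics, and every name in it (loss, gradient_descent, the forcing parameter) is hypothetical.

```python
# Illustrative only: gradient descent with a momentum-style "forcing" term on
# a toy non-convex loss. This does NOT reproduce the paper's canonical forced
# dynamics; it merely shows the kind of local-optimization loop whose failure
# modes (local minima, numerical precision) suspect families formalize.
import numpy as np

def loss(w):
    # Toy double-well loss: global minimum near w = -1, shallow local
    # minimum near w = +1.
    return (w**2 - 1.0)**2 + 0.3 * w

def grad(w, eps=1e-6):
    # Central-difference gradient; a real implementation would be analytic.
    return (loss(w + eps) - loss(w - eps)) / (2.0 * eps)

def gradient_descent(w0, lr=0.01, forcing=0.5, steps=2000):
    w, velocity = w0, 0.0
    for _ in range(steps):
        velocity = forcing * velocity - lr * grad(w)  # momentum-style forcing
        w += velocity
    return w

print(gradient_descent(w0=-1.5))  # reaches the global minimum near w = -1
print(gradient_descent(w0=+0.5))  # stalls in the shallow local minimum near w = +1
```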


international symposium on neural networks | 1998

Non-suspiciousness: a generalisation of convexity in the frame of foundations of numerical analysis and learning

Monica Bianchini; Stefano Fanelli; Marco Gori; Marco Protasi

The effectiveness of connectionist models in emulating intelligent behaviour is strictly related to the capability of the learning algorithms to find optimal or near-optimal solutions. In this paper, a canonical reduction of gradient descent dynamics is proposed, allowing the formulation of neural network learning as a finite continuous optimisation problem, under some nonsuspiciousness conditions. In the linear case, the nonsuspect nature of the problem guarantees the implementation of an iterative method with O(n^2) computational complexity. Finally, since nonsuspiciousness is a generalisation of the concept of convexity, it is possible to apply this theory to the resolution of nonlinear problems.
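
To make the O(n^2) claim concrete in the linear case: one standard iterative scheme with exactly that per-iteration cost is conjugate gradient on the normal equations. The sketch below illustrates that cost profile under the usual full-rank assumption; it is not the paper's method, and cg_least_squares is a hypothetical name.

```python
# A minimal sketch, not the paper's algorithm: conjugate gradient on the
# normal equations A^T A w = A^T b. Once the n x n Gram matrix is formed,
# each iteration costs O(n^2) (one matrix-vector product), and in exact
# arithmetic at most n iterations are needed.
import numpy as np

def cg_least_squares(A, b, tol=1e-10, max_iter=None):
    """Solve min_w ||A w - b|| via conjugate gradient on the normal equations."""
    G = A.T @ A                 # n x n Gram matrix (assumed positive definite)
    c = A.T @ b
    n = G.shape[0]
    w = np.zeros(n)
    r = c - G @ w               # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        Gp = G @ p              # the O(n^2) step
        alpha = rs / (p @ Gp)
        w += alpha * p
        r -= alpha * Gp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
print(np.allclose(cg_least_squares(A, b),
                  np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```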


international symposium on neural networks | 1993

An efficient algorithm for the binary classification of patterns using MLP-networks

Stefano Fanelli; M. Di Martino; Marco Protasi

An alternative to the back propagation algorithm for the effective training of multi-layer perceptron (MLP)-networks with binary output is described. The algorithm determines the optimal set of weights by an iterative scheme based on the singular value decomposition (SVD) method and the Fletcher and Reeves version of the conjugate gradient method.
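
The two ingredients the abstract names can each be sketched in a few lines: a least-squares solve through the SVD, and the Fletcher-Reeves variant of nonlinear conjugate gradient, whose defining choice is beta = ||g_new||^2 / ||g_old||^2. The snippet below shows both in isolation, with a crude backtracking line search standing in for a proper one; it is an assumption-laden illustration, not the paper's combined iterative scheme.

```python
# Hedged sketch of the two building blocks named in the abstract; this is not
# the paper's algorithm.
import numpy as np

def svd_least_squares(H, T):
    """Weights W minimizing ||H W - T|| via the SVD pseudoinverse."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)    # guard tiny singular values
    return Vt.T @ (s_inv[:, None] * (U.T @ T))

def fletcher_reeves(f, grad_f, w, steps=100):
    """Nonlinear CG with the Fletcher-Reeves beta and a crude backtracking
    line search (a production version would enforce Wolfe conditions)."""
    g = grad_f(w)
    d = -g
    for _ in range(steps):
        t = 1.0
        while f(w + t * d) > f(w) + 1e-4 * t * (g @ d):  # Armijo condition
            t *= 0.5
            if t < 1e-12:
                break
        w = w + t * d
        g_new = grad_f(w)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
    return w

rng = np.random.default_rng(0)
H, T = rng.standard_normal((20, 5)), rng.standard_normal((20, 2))
W = svd_least_squares(H, T)
print(np.allclose(H.T @ (H @ W - T), 0, atol=1e-8))   # normal equations hold

Q = np.diag([1.0, 10.0])
w_min = fletcher_reeves(lambda w: 0.5 * w @ Q @ w, lambda w: Q @ w,
                        np.array([3.0, -2.0]))
print(w_min)   # should approach [0, 0]
```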


international conference on mathematics of neural networks models algorithms and applications | 1997

Unimodal loading problems

Monica Bianchini; Stefano Fanelli; Marco Gori; Marco Protasi

This paper deals with optimal learning and provides a unified viewpoint of the most significant results in the field. The focus is on the problem of local minima in the cost function, which is likely to affect more or less any learning algorithm. We give some intriguing links between optimal learning and the computational complexity of loading problems. We exhibit a computational model such that the solution of all loading problems giving rise to unimodal error functions requires the same time, thus suggesting that they belong to the same computational class.


Archive | 1994

Computational Experiences of New Direct Methods for the On-line Training of MLP-Networks with Binary Outputs

M. Di Martino; Stefano Fanelli; Marco Protasi

In this paper the authors consider some recent "direct methods" for the training of a three-layered feedforward neural network with thresholds: Algorithms II and III (Barmann et al., 1992), here named FBFBK; the Least Squares Backpropagation (LSB) algorithm (Barmann et al., 1993); and their innovative method, the Iterative Conjugate Gradient Singular Value Decomposition (ICGSVD) algorithm (Di Martino et al., 1993).
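
What these "direct methods" share is the replacement of gradient steps by layer-wise linear least-squares solves, inverting the activation to obtain targets for the hidden layer. The sketch below illustrates only that shared idea for one hidden layer, under strong simplifying assumptions (tanh activation, targets clipped so it can be inverted); it does not reproduce FBFBK, LSB, or ICGSVD, and every name in it is hypothetical.

```python
# Loose illustration of the layer-wise least-squares idea behind "direct
# methods"; NOT a reproduction of FBFBK, LSB, or ICGSVD.
import numpy as np

def direct_train_two_layer(X, T, hidden=8, passes=5, seed=1):
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    W2 = rng.standard_normal((hidden, T.shape[1])) * 0.5
    for _ in range(passes):
        # Hidden-layer targets: least-squares preimage of T through the
        # current W2, clipped so tanh can be inverted.
        H_target = np.clip(T @ np.linalg.pinv(W2), -0.99, 0.99)
        W1 = np.linalg.lstsq(X, np.arctanh(H_target), rcond=None)[0]
        H = np.tanh(X @ W1)
        W2 = np.linalg.lstsq(H, T, rcond=None)[0]  # output layer: least squares
    return W1, W2

# Toy usage: XOR, with a bias column appended to the inputs.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[-1.0], [1.0], [1.0], [-1.0]])
W1, W2 = direct_train_two_layer(X, T)
# With 8 hidden units the final output solve interpolates the 4 patterns,
# so this should closely reproduce T.
print(np.tanh(X @ W1) @ W2)
```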


IEEE Transactions on Neural Networks | 1996

Exploring and comparing the best "direct methods" for the efficient training of MLP-networks

M. Di Martino; Stefano Fanelli; Marco Protasi


international colloquium on automata languages and programming | 1988

On the Learnability of DNF Formulae

Ludek Kucera; Alberto Marchetti-Spaccamela; Marco Protasi


international symposium on neural networks | 1993

A new improved online algorithm for multi-decisional problems based on MLP-networks using a limited amount of information

M. Di Martino; Stefano Fanelli; Marco Protasi


Proceedings of SPIE | 1993

Off-line and on-line backpropagation methods with various levels of redundancy

Alessandra Di Medio; Stefano Fanelli; Marco Protasi


Archive | 1999

Complexity and approximation: combinatorial optimization problems and their approximability properties

Giorgio Ausiello; Pierluigi Crescenzi; Giorgio Gambosi; Viggo Kann; Alberto Marchetti-Spaccamela; Marco Protasi

Collaboration


Dive into Marco Protasi's collaborations.

Top Co-Authors

Stefano Fanelli, University of Rome Tor Vergata
Giorgio Gambosi, University of Rome Tor Vergata
Marco Gori, University of Florence
Alberto Ricci, Sapienza University of Rome
Alessandra Di Medio, University of Rome Tor Vergata
Giorgio Ausiello, Sapienza University of Rome
Ludek Kucera, Charles University in Prague