Publication


Featured research published by Franco Scarselli.


Neural Networks | 1998

Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results

Franco Scarselli; Ah Chung Tsoi

In this paper, we present a review of some recent works on approximation by feedforward neural networks. A particular emphasis is placed on the computational aspects of the problem, i.e. we discuss the possibility of realizing a feedforward neural network which achieves a prescribed degree of accuracy of approximation, and the determination of the number of hidden layer neurons required to achieve this accuracy. Furthermore, a unifying framework is introduced to understand existing approaches to investigate the universal approximation problem using feedforward neural networks. Some new results are also presented. Finally, two training algorithms are introduced which can determine the weights of feedforward neural networks, with sigmoidal activation neurons, to any degree of prescribed accuracy. These training algorithms are designed so that they do not suffer from the problems of local minima which commonly affect neural network learning algorithms.
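A minimal numerical sketch of the universal approximation idea (not the paper's constructive algorithms): a one-hidden-layer sigmoid network fits a 1-D target, with the hidden layer drawn at random and only the linear output weights solved by least squares, so the fit involves no local minima; the approximation error shrinks as hidden units are added.

```python
# Sketch: one-hidden-layer sigmoid network fitted to a 1-D target.
# Hidden weights are random; only the output layer is solved (least squares),
# so this illustrative fit involves no local minima.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fit_sigmoid_net(x, y, n_hidden):
    """Return a callable approximating y = f(x) with n_hidden sigmoid units."""
    W = rng.normal(scale=5.0, size=n_hidden)        # input-to-hidden weights
    b = rng.uniform(-5.0, 5.0, size=n_hidden)       # hidden biases
    H = sigmoid(np.outer(x, W) + b)                 # hidden activations
    c, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights by least squares
    return lambda xq: sigmoid(np.outer(xq, W) + b) @ c

x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(3 * x)                                   # target function to approximate
for n_hidden in (5, 20, 80):
    err = np.max(np.abs(fit_sigmoid_net(x, y, n_hidden)(x) - y))
    print(f"{n_hidden:3d} hidden units -> max error {err:.3f}")
```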


IEEE Transactions on Neural Networks | 2009

The Graph Neural Network Model

Franco Scarselli; Marco Gori; Ah Chung Tsoi; Markus Hagenbuchner; Gabriele Monfardini

Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ R^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
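A minimal sketch of the computation scheme described above, with toy parameters rather than a trained model: node states are updated from neighbour states and labels until a fixed point, and an output layer maps each state to τ(G, n) ∈ R^m.

```python
# Minimal GNN computation sketch (toy parameters, not a trained model): each node
# state is repeatedly updated from its in-neighbours' states and its own label
# until a fixed point, then an output layer maps the state to tau(G, n) in R^m.
import numpy as np

rng = np.random.default_rng(1)
state_dim, label_dim, out_dim = 4, 3, 2

# Toy directed graph: node labels and, for each node, the sources of its incoming edges.
labels = {n: rng.normal(size=label_dim) for n in range(5)}
in_neighbours = {0: [1, 2], 1: [2], 2: [3, 4], 3: [0], 4: [0, 3]}

# Random transition/output parameters; scaled down so the update is a contraction
# and the fixed-point iteration converges (a requirement of the GNN model).
A = 0.1 * rng.normal(size=(state_dim, state_dim))
B = rng.normal(size=(state_dim, label_dim))
C = rng.normal(size=(out_dim, state_dim))

def transition(x, n):
    """Sum of per-neighbour contributions: independent of neighbour order and count."""
    return np.tanh(sum(A @ x[u] for u in in_neighbours[n]) + B @ labels[n])

x = {n: np.zeros(state_dim) for n in labels}
for _ in range(50):                       # iterate the global update towards the fixed point
    x = {n: transition(x, n) for n in labels}

tau = {n: C @ x[n] for n in labels}       # tau(G, n): an m-dimensional output per node
print(tau[0])
```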


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Are multilayer perceptrons adequate for pattern recognition and verification?

Marco Gori; Franco Scarselli

The paper discusses the ability of multilayer perceptrons (MLPs) to model the probability distribution of data in typical pattern recognition and verification problems. It is proven that multilayer perceptrons with sigmoidal units and a number of hidden units less than or equal to the number of inputs are unable to model patterns distributed in typical clusters, since these networks draw open separation surfaces in the pattern space. When using more hidden units than inputs, the separation surfaces can be closed but, unfortunately, it is proven that determining whether or not an MLP draws closed separation surfaces in the pattern space is NP-hard. The major conclusion of the paper is somewhat opposite to what is believed and reported in many application papers: MLPs are definitely not adequate for pattern recognition applications requiring reliable rejection and, especially, they are not adequate for pattern verification tasks.
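A toy two-input sketch of the open/closed separation-surface distinction, using hand-picked weights (not the paper's proofs): with a single sigmoid hidden unit the accept region is an unbounded half-plane, so points arbitrarily far from the data are still accepted, while three hidden units can enclose a bounded region.

```python
# Open vs. closed separation surfaces on two inputs (hand-picked weights).
# One hidden sigmoid unit yields an open (half-plane) accept region; three hidden
# units can enclose a bounded, roughly triangular region around the origin.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def accepts_one_hidden(p):
    # 1 hidden unit: accept iff a single linear form is positive -> open half-plane.
    return sigmoid(3.0 * (p @ np.array([1.0, 1.0]))) > 0.5

def accepts_three_hidden(p):
    # 3 hidden units approximating indicators of three half-planes whose
    # intersection is a bounded triangle around the origin.
    normals = np.array([[1.0, 0.0], [-1.0, 1.0], [-1.0, -1.0]])
    offsets = np.array([1.0, 1.0, 1.0])
    h = sigmoid(10.0 * (offsets - p @ normals.T))   # ~1 inside each half-plane
    return h.sum() > 2.5                            # accept only inside all three

far_away = np.array([1000.0, 1000.0])
print(accepts_one_hidden(far_away))     # True: the accept region is unbounded (open)
print(accepts_three_hidden(far_away))   # False: the accept region is bounded (closed)
```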


IEEE Transactions on Neural Networks and Learning Systems | 2014

On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures

Monica Bianchini; Franco Scarselli

Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications that face very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to studying how the depth of feedforward neural networks affects their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity and by studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.
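A toy 1-D illustration, in the spirit of the depth-versus-width comparison above but not the paper's topological bounds: composing a small two-sigmoid-unit "bump" layer with itself makes the number of connected components of the accept region grow exponentially with depth, while the number of units grows only linearly.

```python
# Depth can raise the topological complexity of the classification region:
# a 2-sigmoid-unit "bump" block composed k times yields an accept set
# {x : f(x) > 0.5} whose number of connected components roughly doubles per layer.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def bump(x, steep=16.0):
    """One hidden layer with two sigmoid units: a smooth bump on [0, 1]."""
    return sigmoid(steep * (x - 0.25)) - sigmoid(steep * (x - 0.75))

def count_components(values, threshold=0.5):
    """Number of connected components of {x : f(x) > threshold} on a grid."""
    above = values > threshold
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

x = np.linspace(0.0, 1.0, 20001)
f = x
for depth in range(1, 6):
    f = bump(f)                                   # stack one more bump layer
    print(f"depth {depth} ({2 * depth} sigmoid units): "
          f"{count_components(f)} components above 0.5")
```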


International Symposium on Neural Networks | 2005

A new model for learning in graph domains

Marco Gori; G. Monfardini; Franco Scarselli

In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase that transforms the graphs into a set of flat vectors. However, in this way important topological information may be lost, and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called the graph neural network (GNN), capable of directly processing graphs. GNNs extend recursive neural networks and can be applied to most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed, and some experiments are discussed which assess the properties of the model.
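A small illustration of the preprocessing issue mentioned above, with a hypothetical label-histogram preprocessing: two graphs with identical node labels but different connectivity flatten to the same vector, whereas a graph representation keeps them distinct.

```python
# Why flattening graphs can lose topology: two graphs with the same node labels
# but different edges collapse to the same flat vector under a naive
# label-histogram preprocessing, while a graph representation keeps them apart.
import numpy as np

# A 4-cycle versus a 4-node path, both with node labels a, b, a, b.
graph_cycle = {"labels": ["a", "b", "a", "b"], "edges": [(0, 1), (1, 2), (2, 3), (3, 0)]}
graph_path  = {"labels": ["a", "b", "a", "b"], "edges": [(0, 1), (1, 2), (2, 3)]}

def flatten(graph, vocabulary=("a", "b")):
    """Naive preprocessing: a fixed-size label histogram that ignores edges."""
    return np.array([graph["labels"].count(token) for token in vocabulary])

print(np.array_equal(flatten(graph_cycle), flatten(graph_path)))  # True: topology lost
print(graph_cycle["edges"] == graph_path["edges"])                # False: graphs differ
```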


IEEE Transactions on Neural Networks | 2009

Computational Capabilities of Graph Neural Networks

Franco Scarselli; Marco Gori; Ah Chung Tsoi; Markus Hagenbuchner; Gabriele Monfardini

In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ R^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries for which unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.
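A sketch of unfolding equivalence on a toy graph, using finite-depth unfolding trees rather than the paper's formal definition: nodes whose unfolding trees coincide are unfolding-equivalent, and a GNN necessarily assigns them the same output, which is why only maps preserving this equivalence can be approximated.

```python
# Finite-depth sketch of unfolding equivalence: a node's unfolding tree is built
# from its label and, recursively, the unfolding trees of its neighbours
# (sorted, since the GNN transition aggregates neighbours without an order).
def unfolding_tree(node, labels, neighbours, depth):
    """Nested-tuple unfolding tree of `node`, truncated at `depth`."""
    if depth == 0:
        return (labels[node],)
    return (labels[node],
            tuple(sorted(unfolding_tree(u, labels, neighbours, depth - 1)
                         for u in neighbours[node])))

# A 6-node graph with a symmetry: nodes 0 and 3 play interchangeable roles.
labels = {0: "a", 1: "b", 2: "c", 3: "a", 4: "b", 5: "c"}
neighbours = {0: [1, 2], 1: [0], 2: [0], 3: [4, 5], 4: [3], 5: [3]}

t0 = unfolding_tree(0, labels, neighbours, depth=4)
t3 = unfolding_tree(3, labels, neighbours, depth=4)
print(t0 == t3)   # True: 0 and 3 are unfolding-equivalent, so any GNN gives tau(G,0) = tau(G,3)
print(unfolding_tree(1, labels, neighbours, depth=4) == t0)   # False: node 1 is not equivalent
```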


International World Wide Web Conferences | 2003

Adaptive ranking of web pages

Ah Chung Tsoi; Gianni Morini; Franco Scarselli; Markus Hagenbuchner; Marco Maggini

In this paper, we consider the possibility of altering the PageRank of web pages, from an administrator's point of view, through the modification of the PageRank equation. It is shown that this problem can be formulated and solved using traditional quadratic programming techniques. In addition, it is shown that the number of parameters can be reduced by clustering web pages together through simple clustering techniques. It is demonstrated experimentally on a relatively large web data set, viz. the WT10G, that it is possible to modify the PageRanks of the web pages through the proposed method using a set of linear constraints. It is also shown that the PageRank of other pages may be affected, and that the quality of the result depends on the clustering technique used. Our results compare well with those obtained by a HITS-based method.
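A minimal quadratic-programming sketch in the spirit of the approach above, using a generic formulation (per-page personalisation weights, no clustering) rather than the paper's exact one: since p = (1 - d)(I - dM)^{-1} v is linear in the teleportation vector v, one can keep v close to uniform subject to linear constraints on the resulting PageRanks.

```python
# Generic QP sketch (not the paper's exact formulation): adjust the PageRank
# personalisation vector v as little as possible while forcing one page's rank
# above a prescribed value, using linear constraints on p = K @ v.
import numpy as np
from scipy.optimize import minimize

d = 0.85
# Column-stochastic link matrix of a 4-page toy web graph
# (M[i, j] = probability of following a link from page j to page i).
M = np.array([[0.0, 0.5, 0.0, 1.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0]])
n = M.shape[0]
K = (1.0 - d) * np.linalg.inv(np.eye(n) - d * M)   # PageRank p = K @ v is linear in v

uniform = np.full(n, 1.0 / n)
target_page, target_rank = 2, 0.25                  # require p[2] >= 0.25

res = minimize(
    fun=lambda v: np.sum((v - uniform) ** 2),       # stay close to uniform teleportation
    x0=uniform,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n,
    constraints=[
        {"type": "eq",   "fun": lambda v: np.sum(v) - 1.0},                   # v is a distribution
        {"type": "ineq", "fun": lambda v: (K @ v)[target_page] - target_rank},  # rank constraint
    ],
)
print("adjusted v:", np.round(res.x, 3))
print("PageRank  :", np.round(K @ res.x, 3))
```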


Neural Networks | 2005

2005 Special Issue: Recursive neural networks for processing graphs with labelled edges: theory and applications

Monica Bianchini; Marco Maggini; Lorenzo Sarti; Franco Scarselli

In this paper, we introduce a new recursive neural network model able to process directed acyclic graphs with labelled edges. The model uses a state transition function which considers the edge labels and is independent of both the number and the order of the children of each node. The computational capabilities of the new recursive architecture are assessed. Moreover, in order to test the proposed architecture on a practical, challenging application, the problem of object detection in images is also addressed. In fact, the localization of target objects is a preliminary step in any recognition system. The proposed technique is general and can be applied in different detection systems, since it does not exploit any a priori knowledge on the particular problem. Some experiments on face detection, carried out on scenes acquired by an indoor camera, are reported, showing very promising results.
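A minimal sketch of an edge-label-aware state transition of the kind described above, with toy weights rather than a trained model: each child's state is transformed by a matrix that depends on the connecting edge's label, and the contributions are summed, so the result does not depend on the number or the order of children.

```python
# Sketch of a recursive state transition for DAGs with labelled edges (toy weights):
# a node's state is a sum of per-child contributions, each modulated by the label
# of the connecting edge, passed through tanh together with the node's own label.
import numpy as np

rng = np.random.default_rng(2)
state_dim, node_label_dim, edge_label_dim = 4, 3, 2

# Toy DAG: node labels and, for each node, its children with the connecting edge labels.
node_labels = {n: rng.normal(size=node_label_dim) for n in range(5)}
children = {0: [(1, np.array([1.0, 0.0])), (2, np.array([0.0, 1.0]))],
            1: [(3, np.array([1.0, 1.0]))],
            2: [(3, np.array([0.5, 0.5])), (4, np.array([1.0, 0.0]))],
            3: [], 4: []}

A = 0.3 * rng.normal(size=(edge_label_dim, state_dim, state_dim))  # edge-label mixing tensor
B = 0.3 * rng.normal(size=(state_dim, node_label_dim))             # node-label weights

def state(n):
    """Recursively compute the state of node n from its children's states."""
    total = B @ node_labels[n]
    for child, edge_label in children[n]:
        W_edge = np.einsum("k,kij->ij", edge_label, A)   # edge-label-dependent matrix
        total = total + W_edge @ state(child)            # order-independent sum
    return np.tanh(total)

print(np.round(state(0), 3))   # root state: a fixed-size encoding of the whole DAG
```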


Web Intelligence | 2005

Graph Neural Networks for Ranking Web Pages

Franco Scarselli; Sweah Liang Yong; Marco Gori; Markus Hagenbuchner; Ah Chung Tsoi; Marco Maggini

An artificial neural network model capable of processing general types of graph-structured data has recently been proposed. This paper applies the new model to the computation of customised page ranks in the World Wide Web. The class of customised page ranks that can be implemented in this way is very general, and the approach is easy to use because the neural network model is learned from examples. Some preliminary experimental findings show that the model generalizes well over unseen Web pages and, hence, may be suitable for the task of page rank computation on a large Web graph.


International Workshop of the Initiative for the Evaluation of XML Retrieval | 2006

Document Mining Using Graph Neural Network

Sweah Liang Yong; Markus Hagenbuchner; Ah Chung Tsoi; Franco Scarselli; Marco Gori

The Graph Neural Network is a relatively new machine learning method capable of encoding data as well as relationships between data elements. This paper applies the Graph Neural Network for the first time to a learning task posed at an international competition on the classification of semi-structured documents. Within this setting, the Graph Neural Network is trained to encode and process a relatively large set of XML-formatted documents. It is shown that the Graph Neural Network approach significantly outperforms the results submitted by the best competitor.
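A small sketch (hypothetical document, generic conversion) of how an XML document can be mapped to the kind of labelled graph a Graph Neural Network consumes: one node per element, labelled with its tag, and one edge per parent-child nesting relation.

```python
# Turn an XML document into a labelled graph: nodes are elements (labelled by tag),
# edges are parent-child nesting relations.
import xml.etree.ElementTree as ET

xml_doc = """
<article>
  <title>Document Mining Using Graph Neural Network</title>
  <body><section><p>text</p></section><section><p>text</p></section></body>
</article>
"""

def xml_to_graph(text):
    """Return (node labels, edge list) for the element tree of an XML string."""
    root = ET.fromstring(text)
    labels, edges = [], []

    def visit(element):
        node_id = len(labels)
        labels.append(element.tag)
        for child in element:
            child_id = visit(child)
            edges.append((node_id, child_id))
        return node_id

    visit(root)
    return labels, edges

labels, edges = xml_to_graph(xml_doc)
print(labels)   # ['article', 'title', 'body', 'section', 'p', 'section', 'p']
print(edges)    # parent-child edges, e.g. (0, 1) for article -> title
```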

Collaboration


Dive into Franco Scarselli's collaboration.

Top Co-Authors

Ah Chung Tsoi

Macau University of Science and Technology
