

Publication


Featured research published by Simone G. O. Fiori.


Neural Computation | 2001

A Theory for Learning by Weight Flow on Stiefel-Grassman Manifold

Simone G. O. Fiori

Recently we introduced the concept of neural network learning on the Stiefel-Grassman manifold for multilayer perceptron-like networks. Contributions by other authors on this topic have also appeared in the scientific literature. This article presents a general theory of such learning and illustrates how existing theories may be explained within the proposed framework.
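As a concrete illustration of constrained weight flow, the sketch below performs one gradient step on the Stiefel manifold (matrices with orthonormal columns) using a QR retraction. This is a generic Riemannian-optimization device, not the specific flow derived in the article; the dimensions and random data are illustrative.

```python
import numpy as np

def stiefel_step(W, grad, lr=0.01):
    """One descent step constrained to the Stiefel manifold (W^T W = I)."""
    # Project the Euclidean gradient onto the tangent space at W
    G = grad - W @ (W.T @ grad + grad.T @ W) / 2
    # Euclidean step, then QR retraction back onto the manifold
    Q, R = np.linalg.qr(W - lr * G)
    # Fix column signs so the retraction varies continuously
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((6, 3)))   # start on the manifold
W_new = stiefel_step(W, rng.standard_normal((6, 3)))
```

After the step, `W_new` still has orthonormal columns, which is the whole point of the retraction.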


Image and Vision Computing | 2001

Image compression using principal component neural networks

Saverio Costa; Simone G. O. Fiori

Principal component analysis (PCA) is a well-known statistical processing technique that allows one to study the correlations among the components of multivariate data and to reduce redundancy by projecting the data onto a proper basis. PCA may be performed either in a batch fashion or recursively; the latter method has proven very effective for high-dimensional data, as in image compression. The aim of this paper is to present a comparison of principal component neural networks for still image compression and coding. We first recall basic concepts related to neural PCA, then survey a number of principal component networks from the scientific literature, comparing their structures, learning algorithms and required computational effort, along with a discussion of the advantages and drawbacks of each technique. The conclusion of our comparison among eight principal component networks is that the cascade recursive least-squares algorithm by Cichocki, Kasprzak and Skarbek exhibits the best numerical and structural properties.
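For reference, batch PCA compression of image blocks can be sketched as follows; the recursive neural algorithms compared in the paper estimate the same principal subspace adaptively, sample by sample. The block size and synthetic data here are illustrative assumptions.

```python
import numpy as np

def pca_compress(blocks, k):
    """Project image blocks (one per row) onto the top-k principal axes."""
    mean = blocks.mean(axis=0)
    X = blocks - mean
    # Batch PCA via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                       # k principal directions (rows)
    return X @ basis.T, basis, mean      # codes, basis, mean

def pca_decompress(codes, basis, mean):
    return codes @ basis + mean

rng = np.random.default_rng(1)
# Synthetic stand-in for 8x8 image blocks that truly live in an 8-D subspace
blocks = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 64))
codes, basis, mean = pca_compress(blocks, k=8)
recon = pca_decompress(codes, basis, mean)
```

Each 64-sample block is stored as 8 coefficients; because the synthetic data is exactly rank 8, reconstruction is lossless here.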


Neural Networks | 2000

Blind signal processing by the adaptive activation function neurons

Simone G. O. Fiori

The aim of this paper is to study an information-theoretic learning theory for neural units endowed with adaptive activation functions. The learning rule forces the neuron to approximate the input-output transformation that flattens (makes uniform) the probability density function of its output or, equivalently, that maximizes the entropy of the neuron's response. A network of adaptive activation function neurons is then studied, and the effectiveness of the new structure is tested on independent component analysis (ICA) problems. The new ICA neural algorithm is compared with the closely related mixture-of-densities (MOD) technique by Xu et al. Both simulation results and structural comparison show that the new method is effective and computationally more efficient.
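The entropy-maximization principle can be sketched with a fixed-shape sigmoid neuron trained by the classic single-unit Infomax rule; this is a stand-in for the paper's adaptive activation functions, whose shape is itself learned.

```python
import numpy as np

def infomax_neuron(x, steps=2000, lr=0.01):
    """Sigmoid neuron trained to maximize output entropy: the output
    density is pushed toward the uniform density on (0, 1)."""
    w, b = 1.0, 0.0
    rng = np.random.default_rng(2)
    for _ in range(steps):
        s = rng.choice(x)
        y = 1.0 / (1.0 + np.exp(-(w * s + b)))
        # Stochastic gradient of log|dy/ds|, the single-unit entropy term
        w += lr * (1.0 / w + s * (1.0 - 2.0 * y))
        b += lr * (1.0 - 2.0 * y)
    return w, b

rng = np.random.default_rng(20)
x = rng.standard_normal(5000)
w, b = infomax_neuron(x)
```

After training, the sigmoid approximates the cumulative distribution of the input, so the transformed samples are roughly uniform.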


Signal Processing | 2001

A contribution to (neuromorphic) blind deconvolution by flexible approximated Bayesian estimation

Simone G. O. Fiori

‘Bussgang’ deconvolution techniques for blind equalization of digital channels rely on a Bayesian estimator of the source sequence, defined on the basis of a channel/equalizer cascade model which involves the definition of deconvolution noise. In this paper we consider four ‘Bussgang’ blind deconvolution algorithms for uniformly distributed source signals and investigate their numerical performance as well as some of their analytical features. In particular, we show that the algorithm introduced by the present author, equipped with a flexible (neuromorphic) estimator, is effective: it requires no hypothesis about the convolutional noise level and exhibits satisfactory numerical performance.
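A minimal 'Bussgang' iteration can be sketched as: equalize, apply a memoryless estimator to the output, and adapt the equalizer by LMS toward that estimate. The hard clipper below is an illustrative stand-in for a uniform source on [-1, 1], not the Bayesian estimator derived in the paper; channel and step size are also illustrative.

```python
import numpy as np

def bussgang_equalizer(received, taps=11, lr=1e-3, passes=5):
    """Blind 'Bussgang'-style equalizer sketch with a clipping estimator."""
    w = np.zeros(taps)
    w[taps // 2] = 1.0                         # center-spike initialization
    for _ in range(passes):
        for n in range(taps, len(received)):
            x = received[n - taps:n][::-1]     # regressor (most recent first)
            y = w @ x                          # equalizer output
            d = np.clip(y, -1.0, 1.0)          # memoryless source estimate
            w += lr * (d - y) * x              # LMS step toward the estimate
    return w

rng = np.random.default_rng(3)
source = rng.uniform(-1.0, 1.0, 4000)
channel = np.array([1.0, 0.4, 0.2])            # mild ISI channel (illustrative)
received = np.convolve(source, channel)[:len(source)]
w = bussgang_equalizer(received)
```

The update needs no training sequence: the estimator output plays the role of the desired signal.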


Neural Networks | 2003

Overview of independent component analysis technique with an application to synthetic aperture radar (SAR) imagery processing

Simone G. O. Fiori

We present an overview of independent component analysis, an emerging signal processing technique based on neural networks, with the aim of providing an up-to-date survey of the theoretical streams in this discipline and of its current applications in the engineering area. We also focus on a particular application dealing with a remote sensing technique based on synthetic aperture radar imagery processing: we briefly review the features and main applications of synthetic aperture radar and show how blind signal processing by neural networks may be advantageously employed to enhance the quality of remote sensing data.
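As a minimal concrete example of the kind of technique surveyed, the sketch below separates two mixtures with whitening followed by kurtosis-based fixed-point iterations (a FastICA-style rule); the survey itself covers many such algorithms, and the mixing matrix here is an arbitrary illustrative choice.

```python
import numpy as np

def fastica_2x2(X, steps=100):
    """Separate two mixtures: whiten, then run deflated fixed-point
    iterations with the cubic nonlinearity."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))                  # whitening transform
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.zeros((2, 2))
    for i in range(2):
        w = np.array([1.0, 1.0]) / np.sqrt(2.0)
        for _ in range(steps):
            u = w @ Z
            w = (Z * u ** 3).mean(axis=1) - 3.0 * w   # fixed-point update
            w -= W[:i].T @ (W[:i] @ w)                # deflate found rows
            w /= np.linalg.norm(w)
        W[i] = w
    return W @ Z

rng = np.random.default_rng(5)
S = rng.uniform(-1.0, 1.0, (2, 4000))                 # independent sources
X = np.array([[2.0, 1.0], [1.0, 3.0]]) @ S            # linear mixtures
Y = fastica_2x2(X)
```

The recovered signals match the sources up to the usual ICA ambiguities of permutation, sign, and scale.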


Neurocomputing | 2000

Blind separation of circularly distributed sources by neural extended APEX algorithm

Simone G. O. Fiori

The aim of this work is to present a generalized Hebbian learning theory for complex-weighted linear feed-forward networks endowed with lateral inhibitory connections, and to show how it can be applied to blind separation of complex-valued mixtures. We start by stating an optimization principle for Kung–Diamantaras’ network which leads to a generalized APEX-like learning theory relying on a set of non-linear functions, whose choice determines the network’s ability. We then recall the Sudjianto–Hassoun interpretation of Hebbian learning and show that it leads to the choice of the right set of non-linear functions allowing the network to achieve blind separation. The proposed approach is finally assessed by numerical simulations.
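A minimal complex-valued relative of such Hebbian rules is the complex Oja update, which extracts the principal component of complex data. The extended APEX network adds lateral connections to extract several components; this sketch omits them, and the data model is an illustrative assumption.

```python
import numpy as np

def complex_oja(X, steps=4000, lr=0.02):
    """Complex-valued Oja rule: Hebbian term plus decay, converging to the
    principal eigenvector of the data covariance E[x x^H]."""
    rng = np.random.default_rng(4)
    n = X.shape[0]
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for t in range(steps):
        x = X[:, t % X.shape[1]]
        y = np.vdot(w, x)                                # output y = w^H x
        w += lr * (np.conj(y) * x - np.abs(y) ** 2 * w)  # Hebb + Oja decay
    return w / np.linalg.norm(w)

rng = np.random.default_rng(6)
v = np.array([1.0, 1.0j, -1.0]) / np.sqrt(3.0)           # true direction
s = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
X = np.outer(v, s) + 0.1 * (rng.standard_normal((3, 1000))
                            + 1j * rng.standard_normal((3, 1000)))
w = complex_oja(X)
```

The decay term keeps the weight vector bounded, so no explicit renormalization is needed during learning.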


Neural Networks | 2002

Hybrid independent component analysis by adaptive LUT activation function neurons

Simone G. O. Fiori

The aim of this paper is to present an efficient implementation of unsupervised adaptive-activation function neurons dedicated to one-dimensional probability density estimation, with application to independent component analysis. The proposed implementation is a computationally light improvement to adaptive pseudo-polynomial neurons, recently presented in Fiori, S. (2000a). Blind signal processing by the adaptive activation function neurons. Neural Networks, 13(6), 597-611, and is based upon the concept of look-up table (LUT) neurons.
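The LUT idea can be sketched as an empirical-quantile table: the table maps each input to the index of its equal-probability cell, so the output is approximately uniform on [0, 1]. The bin count and construction below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lut_cdf_neuron(samples, bins=64):
    """Build a look-up-table activation from empirical quantiles of the
    input, approximating the cumulative distribution function."""
    edges = np.quantile(samples, np.linspace(0.0, 1.0, bins + 1))
    def activation(x):
        # Table lookup: index of the equal-probability cell containing x
        return np.searchsorted(edges[1:-1], x) / bins
    return activation

rng = np.random.default_rng(7)
data = rng.standard_normal(10000)
act = lut_cdf_neuron(data)
y = act(data)
```

Evaluating the activation costs one binary search per sample, which is the computational lightness the LUT approach trades the polynomial parameterization for.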


Neural Computation | 2005

Nonlinear Complex-Valued Extensions of Hebbian Learning: An Essay

Simone G. O. Fiori

The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties such as locality and the capability of being applicable to the basic weight-and-sum structure of neuron models. The plain Hebbian principle, however, also presents some inherent theoretical limitations that make it impractical in most cases. Therefore, modifications of the basic Hebbian learning paradigm have been proposed over the past 20 years in order to design profitable signal and data processing algorithms. Such modifications led to the principal component analysis type class of learning rules along with their nonlinear extensions. The aim of this review is primarily to present part of the existing fragmented material in the field of principal component learning within a unified view and contextually to motivate and present extensions of previous works on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction error definition, neural parameters adaptation by constrained optimization of learning criteria of complex-valued arguments, and orthonormality expression via the insertion of topological elements in the networks or by modifying the network learning criterion. In particular, the learning principles considered here and their analysis concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.


International Journal of Neural Systems | 2001

Probability density function learning by unsupervised neurons.

Simone G. O. Fiori

In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.


Neural Processing Letters | 2001

Probability Density Estimation Using Adaptive Activation Function Neurons

Simone G. O. Fiori; Paolo Bucciarelli

In this paper we deal with the problem of approximating the probability density function of a signal by means of adaptive activation function neurons. We compare the proposed approach with one based on a mixture of kernels and show through computer simulations that comparable results may be obtained at limited computational expense.
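The mixture-of-kernels baseline mentioned above can be sketched as a plain Gaussian kernel density estimate; the bandwidth and evaluation grid are illustrative choices.

```python
import numpy as np

def kernel_density(samples, grid, h=0.3):
    """Gaussian mixture-of-kernels estimate of a 1-D probability density:
    one kernel of width h centered on each sample, averaged."""
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(8)
samples = rng.standard_normal(2000)
grid = np.linspace(-4.0, 4.0, 161)
pdf = kernel_density(samples, grid)
```

For standard normal samples the estimate at the origin is close to the true peak value of about 0.399, slightly flattened by the kernel smoothing.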

Collaboration


Dive into Simone G. O. Fiori's collaborations.

Top Co-Authors

Elena Celledoni
Norwegian University of Science and Technology

A. Faba
University of Perugia

L. Albini
University of Perugia

M. Maggi
University of Perugia