Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Aurelio Uncini is active.

Publication


Featured research published by Aurelio Uncini.


International Symposium on Neural Networks | 2014

An effective criterion for pruning reservoir's connections in Echo State Networks

Simone Scardapane; Gabriele Nocco; Danilo Comminiello; Michele Scarpiniti; Aurelio Uncini

Echo State Networks (ESNs) were introduced to simplify the design and training of Recurrent Neural Networks (RNNs) by explicitly separating the recurrent part of the network, the reservoir, from the non-recurrent part. A standard practice in this context is the random initialization of the reservoir, subject to a few loose constraints. Although this results in a simple-to-solve optimization problem, it is in general suboptimal, and several additional criteria have been devised to improve its design. In this paper we provide an effective algorithm for removing redundant connections inside the reservoir during training. The algorithm is based on the correlation of the states of the nodes, hence it depends only on the input signal, is efficient to implement, and is local. By applying it, we can obtain an optimally sparse reservoir in a robust way. We present the performance of our algorithm on two synthetic datasets, showing its effectiveness in terms of better generalization and lower computational complexity of the resulting ESN. This behavior is also investigated for increasing levels of memory and non-linearity required by the task.
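
A minimal sketch of the idea, assuming a basic tanh ESN: states are collected on the input signal, their correlation matrix is computed, and connections between nearly duplicate nodes are zeroed. The 0.95 threshold and the pruning rule are illustrative, not the paper's exact criterion.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small ESN: random input and reservoir weights.
n_in, n_res, T = 1, 50, 500
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

# Run the reservoir on an input signal and collect the state trajectory.
u = rng.uniform(-1, 1, size=(T, n_in))
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Nodes whose states are almost perfectly correlated carry redundant
# information, so the connections between them are pruning candidates.
C = np.corrcoef(states.T)
redundant = np.abs(C) > 0.95
np.fill_diagonal(redundant, False)
W_pruned = np.where(redundant, 0.0, W)
print("connections removed:", int(redundant.sum()))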


International Workshop on Neural Networks | 2015

Benchmarking Functional Link Expansions for Audio Classification Tasks

Simone Scardapane; Danilo Comminiello; Michele Scarpiniti; Raffaele Parisi; Aurelio Uncini

Functional Link Artificial Neural Networks (FLANNs) have been extensively used for tasks of audio and speech classification, due to their combination of universal approximation capabilities and fast training. The performance of a FLANN, however, is known to be dependent on the specific functional link (FL) expansion that is used. In this paper, we provide an extensive benchmark of multiple FL expansions on several audio classification problems, including speech discrimination, genre classification, and artist recognition. Our experimental results show that a random-vector expansion is well suited for classification tasks, achieving the best accuracy in two out of three tasks.
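
A rough sketch of the random-vector expansion mentioned above: random nonlinear projections are appended to the input features, and a regularized least-squares readout is trained on top. The sizes, the tanh activation, and the regularization constant are assumptions.

import numpy as np

def random_vector_expansion(X, n_hidden=100, seed=0):
    # Random-vector functional link: append nonlinear random projections
    # of the input to the original features.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    return np.hstack([X, np.tanh(X @ W + b)])

def train_readout(X, y, reg=1e-3):
    # Regularized least-squares readout; for binary audio classification,
    # y holds labels in {-1, +1} and predictions are thresholded with sign().
    Phi = random_vector_expansion(X)
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)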


Congress on Evolutionary Computation | 2014

GP-based kernel evolution for L2-Regularization Networks

Simone Scardapane; Danilo Comminiello; Michele Scarpiniti; Aurelio Uncini

In kernel-based learning methods, a crucial design parameter is the choice of the kernel function to be used. Although there is, in theory, an infinite range of potential candidates, a handful of kernels covers the majority of actual applications. This is partly due to the difficulty of choosing an optimal kernel function in the absence of a priori information. In this respect, Genetic Programming (GP) techniques have shown interesting capabilities of learning non-trivial kernel functions that outperform commonly used ones. However, experiments have been restricted to the use of Support Vector Machines (SVMs), and have not addressed some problems that are specific to GP implementations, such as diversity maintenance. The aim of this paper is therefore twofold. First, we present a customized GP-based kernel search method that we apply using an L2-Regularization Network as the base learning algorithm. Second, we investigate the problem of diversity maintenance in the context of kernel evolution, and test an adaptive criterion for maintaining it in our algorithm. On the former point, experiments show a gain in accuracy for our method over fine-tuned standard kernels. On the latter, we show that diversity decreases critically fast during the GP iterations, but this decrease does not seem to affect the performance of the algorithm.
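
A minimal sketch of the base learner only, assuming the L2-Regularization Network is fit in closed form (kernel ridge regression) and that a GP individual encodes a kernel composed from simple primitives; the example composition and the regularization constant are illustrative.

import numpy as np

# Primitive kernels a GP individual could combine through sums and products.
def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Z, degree=2):
    return (X @ Z.T + 1.0) ** degree

def candidate_kernel(X, Z):
    # Example individual, e.g. the expression tree add(rbf, scale(0.5, poly)).
    return rbf(X, Z) + 0.5 * poly(X, Z)

def fit_l2_reg_network(X, y, kernel, lam=1e-2):
    # Closed-form fit: solve (K + lam*I) alpha = y; prediction is K(Z, X) @ alpha.
    alpha = np.linalg.solve(kernel(X, X) + lam * np.eye(len(X)), y)
    return lambda Z: kernel(Z, X) @ alpha

# The GP fitness of an individual would be the validation score of the
# network trained with its evolved kernel.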


23rd Workshop of the Italian Neural Networks Society, WIRN 2013 | 2014

A Preliminary Study on Transductive Extreme Learning Machines

Simone Scardapane; Danilo Comminiello; Michele Scarpiniti; Aurelio Uncini

Transductive learning is the problem of designing learning machines that successfully generalize only on a given set of input patterns. In this paper we begin the study of extending Extreme Learning Machine (ELM) theory to the transductive setting, focusing on the binary classification case. To this end, we analyze previous work on Transductive Support Vector Machine (TSVM) learning and introduce the Transductive ELM (TELM) model. Contrary to TSVM, we show that the optimization of TELM results in a purely combinatorial search over the unknown labels. Some preliminary results on an artificial dataset show substantial improvements with respect to a standard ELM model.
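
A toy sketch of the combinatorial view, assuming a standard ELM with a regularized least-squares readout; exhaustive enumeration of the unknown labels is feasible only for a handful of unlabeled points, and the objective used to rank assignments is an assumption.

import itertools
import numpy as np

def elm_objective(X, y, n_hidden=50, lam=1e-2, seed=0):
    # Standard ELM: random hidden layer plus regularized least-squares readout;
    # returns the regularized training objective for a given labeling.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return np.sum((H @ beta - y) ** 2) + lam * np.sum(beta ** 2)

def telm_exhaustive(X_lab, y_lab, X_unl):
    # Brute-force search over the unknown binary labels of the unlabeled patterns.
    best_obj, best_labels = np.inf, None
    for labels in itertools.product([-1.0, 1.0], repeat=len(X_unl)):
        X = np.vstack([X_lab, X_unl])
        y = np.concatenate([y_lab, np.array(labels)])
        obj = elm_objective(X, y)
        if obj < best_obj:
            best_obj, best_labels = obj, labels
    return best_labels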


International Workshop on Neural Networks | 2016

A Comparison of Consensus Strategies for Distributed Learning of Random Vector Functional-Link Networks

Roberto Fierimonte; Simone Scardapane; Massimo Panella; Aurelio Uncini

Distributed machine learning is the problem of inferring a desired relation when the training data is distributed throughout a network of agents (e.g. robots in a robot swarm). Multiple families of distributed learning algorithms are based on the decentralized average consensus (DAC) protocol, an efficient algorithm for computing an average starting from local measurement vectors. The performance of DAC, however, depends strongly on the choice of a weighting matrix associated with the network. In this paper, we perform a comparative analysis of four different strategies for choosing the weighting matrix. As an example application, we consider the distributed sequential algorithm for Random Vector Functional-Link networks. As expected, our experimental simulations show that the training time required by the algorithm is drastically reduced when the weights are properly initialized.
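
One weighting strategy of the kind compared here is sketched below: Metropolis-Hastings weights built from node degrees, plugged into the plain consensus iteration x <- W x. The network, the specific strategy, and the iteration count are illustrative.

import numpy as np

def metropolis_hastings_weights(adj):
    # One common choice for the DAC weighting matrix, built from node degrees.
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def dac(local_values, W, iters=50):
    # Decentralized average consensus: repeated local mixing drives every
    # agent's estimate toward the global average of the initial values.
    x = np.array(local_values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Ring of 4 agents averaging scalar measurements; every entry approaches 2.5.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(dac([1.0, 2.0, 3.0, 4.0], metropolis_hastings_weights(adj)))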


Italian Workshop on Neural Nets | 2013

PM10 Forecasting Using Kernel Adaptive Filtering: An Italian Case Study

Simone Scardapane; Danilo Comminiello; Michele Scarpiniti; Raffaele Parisi; Aurelio Uncini

Short-term prediction of air pollution is gaining increasing attention in the research community, due to its social and economic impact. In this paper we study the application of a Kernel Adaptive Filtering (KAF) algorithm to the problem of predicting PM10 levels in the Italian province of Ancona, and we show how this predictor is able to achieve a significantly lower error by including chemical data correlated with PM10, such as NO2.
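
A minimal kernel adaptive filter in the KLMS family, as one possible instance of the approach described above; the step size, kernel width, and the idea of predicting the next PM10 value from a window of past PM10 and NO2 readings are assumptions.

import numpy as np

class KLMS:
    # Kernel Least-Mean-Squares: grow a dictionary of centers online and
    # correct the prediction with a step proportional to the error.
    def __init__(self, step=0.2, gamma=0.5):
        self.step, self.gamma = step, gamma
        self.centers, self.coeffs = [], []

    def _k(self, x, c):
        return np.exp(-self.gamma * np.sum((x - c) ** 2))

    def predict(self, x):
        return sum(a * self._k(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)                  # one-step-ahead prediction error
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.step * e)
        return e

# Usage sketch: x would be a window of recent PM10 (and NO2) values and d the
# next PM10 reading, fed to update() one sample at a time.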


International Symposium on Neural Networks | 2017

On the use of deep recurrent neural networks for detecting audio spoofing attacks

Simone Scardapane; Lucas Stoffl; Florian Röhrbein; Aurelio Uncini

Biometric security systems based on predefined speech sentences are extremely common nowadays, particularly in low-cost applications where the simplicity of the hardware involved is a great advantage. Audio spoofing verification is the problem of detecting whether a speech segment acquired from such a system is genuine, or whether it was synthesized or modified by a computer in order to make it sound like an authorized person. Developing countermeasures against spoofing attacks is clearly essential for effective biometric and security systems based on audio features, all the more so given recent advances in generative machine learning. Nonetheless, the problem is complicated by the possible lack of knowledge on the technique(s) used to mount the attack, so that anti-spoofing systems should also be able to withstand spoofing attacks that were not explicitly considered in the training stage. In this paper, we analyze the use of deep recurrent networks applied to this task, i.e. networks built by the successive combination of multiple feedforward and recurrent layers. These networks are routinely used in speech recognition and language identification but, to the best of our knowledge, they have never been considered for this specific problem. We evaluate several architectures on the dataset released for the ASVspoof 2015 challenge. We show that, with very standard feature extraction routines and a minimum amount of fine-tuning, the networks can already reach very promising error rates, comparable to state-of-the-art approaches, paving the way to further investigations on the problem using deep RNN models.
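
A generic PyTorch sketch of a deep recurrent classifier in the spirit described above: frame-level feedforward layers followed by stacked LSTM layers and a binary output. The feature dimension, layer sizes, and last-frame pooling are assumptions rather than the configuration evaluated in the paper.

import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    def __init__(self, n_features=60, hidden=128, n_rnn_layers=2):
        super().__init__()
        # Frame-level feedforward layers applied to each acoustic feature vector.
        self.frontend = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Stacked recurrent layers over the frame sequence.
        self.rnn = nn.LSTM(hidden, hidden, num_layers=n_rnn_layers, batch_first=True)
        # Single logit: genuine vs. spoofed.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, frames, n_features)
        h = self.frontend(x)
        out, _ = self.rnn(h)
        return self.head(out[:, -1, :])          # summary of the last frame

# Training would minimize nn.BCEWithLogitsLoss on utterance-level labels.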


Journal of the Audio Engineering Society | 2013

User-Driven Quality Enhancement for Audio Signal Processing

Danilo Comminiello; Simone Scardapane; Michele Scarpiniti; Aurelio Uncini


European Signal Processing Conference | 2016

Diffusion spline adaptive filtering

Simone Scardapane; Michele Scarpiniti; Danilo Comminiello; Aurelio Uncini


Collaboration


Dive into Aurelio Uncini's collaborations.

Top Co-Authors

Simone Scardapane (Sapienza University of Rome)
Danilo Comminiello (Sapienza University of Rome)
Michele Scarpiniti (Sapienza University of Rome)
Raffaele Parisi (Sapienza University of Rome)
Gabriele Nocco (Sapienza University of Rome)
Massimo Panella (Sapienza University of Rome)
Roberto Fierimonte (Sapienza University of Rome)