Publication


Featured research published by Paolo Gastaldo.


Neurocomputing | 2015

An ELM-based model for affective analogical reasoning

Erik Cambria; Paolo Gastaldo; Federica Bisio; Rodolfo Zunino

Between the dawn of the Internet and 2003, there were just a few dozen exabytes of information on the Web. Today, that much information is created weekly. The opportunity to capture the opinions of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised increasing interest both in the scientific community, for the exciting open challenges, and in the business world, for the remarkable fallout in marketing and financial prediction. Keeping up with the ever-growing amount of unstructured information on the Web, however, is a formidable task that requires fast and efficient models for opinion mining. In this paper, we explore how the high generalization performance, low computational complexity, and fast learning speed of extreme learning machines can be exploited to perform analogical reasoning in a vector space model of affective common-sense knowledge. In particular, by enabling a fast reconfiguration of such a vector space, extreme learning machines allow the polarity associated with natural language concepts to be calculated in a more dynamic and accurate way and, hence, enable better concept-level sentiment analysis.
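
To make the mechanism concrete, here is a minimal NumPy sketch of standard ELM training, the building block the paper relies on; the function names and interface are illustrative, not taken from the paper.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, reg=1e-3, seed=0):
    """Basic extreme learning machine: a random, frozen hidden layer;
    only the output weights are fitted, via regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Ridge-regularized least squares: beta = (H'H + reg*I)^(-1) H'Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because retraining reduces to a single linear solve, the model can be reconfigured cheaply whenever the underlying vector space changes, which is the property the paper exploits for dynamic polarity computation.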


IEEE Transactions on Robotics | 2011

Tactile-Data Classification of Contact Materials Using Computational Intelligence

Sergio Decherchi; Paolo Gastaldo; Ravinder Dahiya; Maurizio Valle; Rodolfo Zunino

The two major components of a robotic tactile-sensing system are the tactile-sensing hardware at the lower level and the computational/software tools at the higher level. Focusing on the latter, this research assesses the suitability of computational-intelligence (CI) tools for tactile-data processing. In this context, the paper addresses the classification of a sensed object's material from the recorded raw tactile data. For this purpose, three CI paradigms, namely the support-vector machine (SVM), regularized least squares (RLS), and the regularized extreme learning machine (RELM), have been employed, and their performance is compared on this task. The comparative analysis shows that the SVM provides the best tradeoff between classification accuracy and the computational complexity of the classification algorithm. Experimental results indicate that CI tools are effective in dealing with the challenging problem of material classification.
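
The comparison described above can be sketched with off-the-shelf tools. Below is a hedged example using scikit-learn, with synthetic data standing in for the recorded tactile signals; scikit-learn provides SVM and ridge (RLS-style) classifiers, while an RELM would have to be assembled separately, for instance from the ELM sketch shown earlier.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import RidgeClassifier

# Synthetic stand-in for tactile feature vectors (3 candidate materials).
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("RLS", RidgeClassifier(alpha=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```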


Neurocomputing | 2013

Circular-ELM for the reduced-reference assessment of perceived image quality

Sergio Decherchi; Paolo Gastaldo; Rodolfo Zunino; Erik Cambria; Judith Redi

Providing a satisfactory visual experience is one of the main goals for present-day electronic multimedia devices. All the enabling technologies for storage, transmission, compression, and rendering should preserve, and possibly enhance, the quality of the video signal; to do so, quality-control mechanisms are required. These mechanisms rely on systems that can assess the visual quality of the incoming signal consistently with human perception. Computational Intelligence (CI) paradigms represent a suitable technology to tackle this challenging problem. The present research introduces an augmented version of the basic Extreme Learning Machine (ELM), the Circular-ELM (C-ELM), which proves effective in addressing the visual quality assessment problem. The C-ELM model derives from the original Circular BackPropagation (CBP) architecture, in which the input vector of a conventional MultiLayer Perceptron (MLP) is augmented by one additional dimension, the circular input; this paper shows that C-ELM can actually benefit from the enhancement provided by the circular input without losing any of the fruitful properties that characterize the basic ELM framework. In the proposed framework, C-ELM handles the actual mapping of visual signals into quality scores, successfully reproducing perceptual mechanisms. Its effectiveness is proven on recognized benchmarks and for four different types of distortion.
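
The circular input itself is a one-line transformation: each sample is extended with its squared Euclidean norm, which is what lets the augmented network realize radial decision regions at negligible extra cost. A minimal sketch follows; the function name is chosen here for illustration.

```python
import numpy as np

def add_circular_input(X):
    """Append the 'circular' dimension of CBP/C-ELM: one extra input
    holding the squared Euclidean norm of each sample."""
    return np.hstack([X, np.sum(X ** 2, axis=1, keepdims=True)])
```

Feeding the augmented matrix to an ordinary ELM, as in the earlier sketch, approximates the C-ELM construction described above.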


IEEE Transactions on Circuits and Systems II: Express Briefs | 2012

Efficient Digital Implementation of Extreme Learning Machines for Classification

Sergio Decherchi; Paolo Gastaldo; Alessio Leoncini; Rodolfo Zunino

The availability of compact, fast circuitry for the support of artificial neural systems is a long-standing and critical requirement for many important applications. This brief addresses the implementation of the powerful extreme learning machine (ELM) model on reconfigurable digital hardware (HW). The design strategy first provides a training procedure for ELMs that effectively trades off prediction accuracy against network complexity, which in turn facilitates the optimization of HW resources. The brief then describes and analyzes two implementation approaches: one involving field-programmable gate array devices and one targeting low-cost, low-performance devices such as complex programmable logic devices. Experimental results show that, in both cases, the design approach yields efficient digital architectures with satisfactory performance and limited cost.
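
One ingredient of any such hardware mapping is reducing weights to fixed-point precision. The brief's actual design flow is not reproduced here, but a generic quantization step might look as follows; the parameter names are illustrative.

```python
import numpy as np

def quantize_fixed_point(w, frac_bits=8, total_bits=16):
    """Round weights to a signed fixed-point grid, as one would before
    mapping an ELM onto FPGA or CPLD logic."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(w * scale), lo, hi)   # saturate to the representable range
    return q / scale                           # back to float for simulation
```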


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Color Distribution Information for the Reduced-Reference Assessment of Perceived Image Quality

Judith Redi; Paolo Gastaldo; Ingrid Heynderickx; Rodolfo Zunino

Reduced-reference systems can predict in real time the perceived quality of images for digital broadcasting, requiring only that a limited set of features, extracted from the original undistorted signals, be transmitted together with the image data. This paper uses descriptors based on the color correlogram, which analyze the alterations in the color distribution of an image caused by distortions, as the reduced-reference data. The processing architecture relies on a double layer at the receiver end. The first layer identifies the kind of distortion that may affect the received signal. The second layer deploys a dedicated prediction module for each type of distortion; every predictor yields an objective quality score, thus completing the estimation process. Computational-intelligence models are used extensively to support both layers with empirical training. The double-layer architecture implements a general-purpose image quality assessment system that is not tied to specific distortions and, at the same time, benefits from the accuracy of specific, distortion-targeted metrics. Experimental results based on subjective quality data confirm the general validity of the approach.
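
A hedged sketch of the double-layer idea follows: a first-stage classifier identifies the distortion type, and a dedicated per-distortion regressor then maps the reduced-reference features to a quality score. Generic SVMs stand in for the paper's computational-intelligence models, and feature extraction (the color-correlogram descriptors) is assumed to happen upstream.

```python
import numpy as np
from sklearn.svm import SVC, SVR

class TwoLayerQualityPredictor:
    """Layer 1 picks the distortion type; layer 2 holds one quality
    regressor per distortion. Inputs are NumPy arrays."""

    def __init__(self, distortions):
        self.classifier = SVC()
        self.regressors = {d: SVR() for d in distortions}

    def fit(self, X, dist_labels, scores):
        self.classifier.fit(X, dist_labels)
        for d, reg in self.regressors.items():
            mask = dist_labels == d
            reg.fit(X[mask], scores[mask])
        return self

    def predict(self, X):
        kinds = self.classifier.predict(X)
        return np.array([self.regressors[k].predict(x[None, :])[0]
                         for k, x in zip(kinds, X)])
```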


Signal Processing: Image Communication | 2005

Objective quality assessment of displayed images by using neural networks

Paolo Gastaldo; Rodolfo Zunino; Ingrid Heynderickx; Elena Vicario

Considerable research effort is being devoted to the development of image-enhancement algorithms that improve the quality of displayed digital pictures. Reliable methods for measuring perceived image quality are needed to evaluate the performance of those algorithms, and such measurements require a univariant (i.e., no-reference) approach. The system presented in this paper applies concepts derived from computational intelligence and supports an objective quality-assessment method based on a circular back-propagation (CBP) neural model. The network is trained to predict quality ratings, as scored by human assessors, from numerical features that characterize images. As such, the method aims at reproducing perceived image quality rather than defining a comprehensive model of the human visual system. The connectionist approach allows one to decouple the task of feature selection from the consequent mapping of features into an objective quality score. Experimental results on the perceptual effects of a family of contrast-enhancement algorithms confirm the method's effectiveness, as the system reproduces quite accurately the image quality perceived by human assessors.
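
Since common ML libraries have no CBP model, a rough stand-in combines a conventional MLP regressor with the circular augmentation shown earlier; the sketch below mirrors only the features-to-quality-score mapping, not the actual CBP training procedure, and the names are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_cbp_like(features, subjective_scores):
    """Train an MLP on feature vectors extended with their squared norm
    (the CBP 'circular' input); an approximation, not the CBP algorithm."""
    X = np.hstack([features, np.sum(features ** 2, axis=1, keepdims=True)])
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, subjective_scores)
    return model
```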


IEEE Transactions on Neural Networks | 2002

Objective quality assessment of MPEG-2 video streams by using CBP neural networks

Paolo Gastaldo; Stefano Rovetta; Rodolfo Zunino

The increasing use of compression standards in broadcasting digital TV has raised the need for established criteria to measure perceived quality. Novel methods must take into account the specific artifacts introduced by digital compression techniques. This paper presents a methodology using circular back-propagation (CBP) neural networks for the objective quality assessment of Moving Picture Experts Group (MPEG) video streams. Objective features are continuously extracted from compressed video streams on a frame-by-frame basis; they feed the CBP network, which estimates the corresponding perceived quality. The resulting adaptive modeling of subjective perception supports a real-time system for monitoring displayed video quality. The overall system mimics perception but does not require an analytical model of the underlying physical phenomenon. The ability to process compressed video streams represents a crucial advantage over existing approaches, as avoiding the decoding process greatly enhances the system's real-time performance. Experimental evidence confirmed the validity of the approach. The system was tested on real test videos with content ranging from fiction to sport. The neural model provided a satisfactory, continuous-time approximation of the actual scoring curves, which was validated statistically in terms of confidence analysis. As expected, videos with slowly varying content, such as fiction, yielded the best performance.
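
The monitoring loop implied by the abstract can be outlined as follows; extract_stream_features and model are hypothetical placeholders for the paper's compressed-domain feature extractor and trained CBP network.

```python
import collections

def monitor_stream(frames, extract_stream_features, model, window=25):
    """Yield a running quality estimate, one value per compressed frame,
    without decoding the video. A sketch under stated assumptions."""
    recent = collections.deque(maxlen=window)
    for frame in frames:
        f = extract_stream_features(frame)     # e.g. bitstream statistics
        recent.append(model.predict([f])[0])   # per-frame quality estimate
        yield sum(recent) / len(recent)        # smoothed continuous-time score
```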


Journal of Electronic Imaging | 2005

Neural networks for the no-reference assessment of perceived quality

Paolo Gastaldo; Rodolfo Zunino

Imaging algorithms often require reliable methods to evaluate the quality effects of the visual artifacts that digital processing brings about. We adopt a no-reference objective method for predicting the perceived quality of images in a deterministic fashion. JPEG coding provides a significant and interesting case study. To enhance the coherence of the quality estimates with the empirical evidence of the perceptual phenomenon, the system parameters are adjusted using subjective scores obtained from human assessors. Principal component analysis is first used to assemble a set of objective features that best characterize the information in the image data. A neural network based on the circular back-propagation (CBP) model then associates the selected features with the corresponding predictions of quality ratings, reproducing the scoring process of human assessors. The neural model makes it possible to decouple the process of feature selection from the task of mapping features into a quality score. Results on a public database from an image-quality experiment involving JPEG-compressed images, together with comparisons against existing objective methods, confirm the approach's effectiveness.
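
The two-stage recipe, PCA to distill the objective features followed by a learned mapping to the subjective score, can be expressed as a short pipeline. An MLP again stands in for the CBP network, and the variable names are illustrative.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

quality_model = make_pipeline(
    PCA(n_components=10),                    # stage 1: compress objective features
    MLPRegressor(hidden_layer_sizes=(32,),   # stage 2: map features to quality
                 max_iter=2000, random_state=0),
)
# quality_model.fit(jpeg_features, mos_scores)  # MOS = mean opinion scores
```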


EURASIP Journal on Image and Video Processing | 2013

Supporting visual quality assessment with machine learning

Paolo Gastaldo; Rodolfo Zunino; Judith Redi

Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms tackle the quality assessment task from a different perspective, as the eventual goal is to mimic quality perception instead of designing an explicit model of the human visual system. Several studies have already proven the ability of ML-based approaches to address visual quality assessment; nevertheless, these paradigms are highly prone to overfitting, and their overall reliability may be questionable. In fact, a prerequisite for successfully using ML in modeling perceptual mechanisms is a profound understanding of the advantages and limitations that characterize learning machines. This paper illustrates and exemplifies the good practices to be followed.
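
One of the good practices the paper argues for can be shown in miniature: never trust the fit on the training set, and estimate generalization on held-out data instead. The data below is synthetic.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=20, noise=5.0, random_state=0)
model = SVR(C=10.0)
train_r2 = model.fit(X, y).score(X, y)              # optimistic in-sample fit
cv_r2 = cross_val_score(model, X, y, cv=5).mean()   # honest out-of-sample estimate
print(f"train R^2 = {train_r2:.2f}, cross-validated R^2 = {cv_r2:.2f}")
```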


Archive | 2009

Computational Intelligence in Security for Information Systems

Álvaro Herrero; Paolo Gastaldo; Rodolfo Zunino; Emilio Corchado

The Second International Workshop on Computational Intelligence for Security in Information Systems (CISIS09) presented recent developments in the dynamically expanding fields of Data Mining and Intelligence, Infrastructure Protection, Network Security, Biometry, and Industrial Perspectives. The CISIS workshop series offers a forum to the different communities working on intelligent systems for security. The overarching purpose of the CISIS conferences has been to provide a broad, interdisciplinary meeting ground offering the opportunity to interact with leading industries active in the critical area of security and to gain a picture of the current solutions adopted in practical domains. This volume of Advances in Intelligent and Soft Computing contains the accepted papers presented at CISIS09, which was held in Burgos, Spain, on September 23rd-26th, 2009.

Collaboration


Paolo Gastaldo's closest collaborators and their affiliations.

Top Co-Authors

Sergio Decherchi, Istituto Italiano di Tecnologia


Judith Redi, Delft University of Technology


Erik Cambria, Nanyang Technological University
